
Showing papers on "Feature vector published in 1980"


Journal ArticleDOI
TL;DR: The results suggest a new approach to dynamic time warping for isolated words in which both the reference and test patterns are linearly warped to a fixed length, and then a simplified dynamic time warping algorithm is used to handle the nonlinear component of the time alignment.
Abstract: The technique of dynamic programming for the time registration of a reference and a test pattern has found widespread use in the area of isolated word recognition. Recently, a number of variations on the basic time warping algorithm have been proposed by Sakoe and Chiba, and Rabiner, Rosenberg, and Levinson. These algorithms all assume that the test input is the time pattern of a feature vector from an isolated word whose endpoints are known (at least approximately). The major differences in the methods are the global path constraints (i.e., the region of possible warping paths), the local continuity constraints on the path, and the distance weighting and normalization used to give the overall minimum distance. The purpose of this investigation is to study the effects of such variations on the performance of different dynamic time warping algorithms for a realistic speech database. The performance measures that were used include: speed of operation, memory requirements, and recognition accuracy. The results show that both axis orientation and relative length of the reference and the test patterns are important factors in recognition accuracy. Our results suggest a new approach to dynamic time warping for isolated words in which both the reference and test patterns are linearly warped to a fixed length, and then a simplified dynamic time warping algorithm is used to handle the nonlinear component of the time alignment. Results with this new algorithm show performance comparable to or better than that of all other dynamic time warping algorithms that were studied.
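The two-stage scheme suggested above is straightforward to prototype. The sketch below is illustrative only: the Euclidean frame distance, the warp length of 40 frames, and the band width are assumptions, not the paper's settings. Both patterns are first linearly warped to a fixed length, and a band-restricted dynamic programming pass then absorbs the residual nonlinear misalignment.

```python
import numpy as np

def linear_warp(frames, target_len):
    """Linearly resample a (T, D) sequence of feature vectors to target_len frames."""
    T = len(frames)
    src = np.linspace(0.0, T - 1, target_len)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (src - lo)[:, None]
    return (1.0 - w) * frames[lo] + w * frames[hi]

def banded_dtw(ref, test, band=4):
    """Simplified DTW on two equal-length sequences, restricted to a narrow band
    around the diagonal so it only handles the residual nonlinear alignment."""
    N = len(ref)
    D = np.full((N + 1, N + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(max(1, i - band), min(N, i + band) + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            D[i, j] = d + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[N, N] / (2 * N)   # rough path-length normalization

# Usage sketch:
#   ref = linear_warp(ref_frames, 40); test = linear_warp(test_frames, 40)
#   score = banded_dtw(ref, test)
```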

618 citations


Journal ArticleDOI
TL;DR: This paper presents a technique for normalizing Fourier descriptors which retains all shape information and is computationally efficient; it is combined with certain other techniques relating to accurate contour representation in a complete three-dimensional aircraft recognition algorithm.
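The TL;DR does not spell out the normalization itself. The sketch below shows one common way to make Fourier descriptors of a closed contour invariant to translation and scale, with a partial rotation/starting-point normalization; it illustrates the general idea rather than the paper's exact, fully information-preserving procedure.

```python
import numpy as np

def normalized_fd(contour, k=16):
    """contour: (N, 2) array of (x, y) boundary points of a closed shape.
    Returns a real feature vector of normalized Fourier descriptors."""
    z = contour[:, 0] + 1j * contour[:, 1]        # boundary as a complex signal
    F = np.fft.fft(z) / len(z)
    F[0] = 0.0                                    # drop DC term -> translation invariance
    F = F / np.abs(F[1])                          # scale by |F(1)| -> size invariance
    F = F * np.exp(-1j * np.angle(F[1]))          # zero the phase of F(1): only a partial
                                                  # rotation/starting-point normalization
    desc = np.concatenate([F[1:k + 1], F[-k:]])   # keep the k lowest +/- harmonics
    return np.concatenate([desc.real, desc.imag])
```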

368 citations


Journal ArticleDOI
TL;DR: A versatile technique is described for designing computer algorithms, referred to as classifiers, that separate multiple-dimensional data (feature vectors) into two classes and achieve nearly Bayes-minimum error rates while requiring relatively small amounts of memory.
Abstract: We describe a versatile technique for designing computer algorithms for separating multiple-dimensional data (feature vectors) into two classes. We refer to these algorithms as classifiers. Our classifiers achieve nearly Bayes-minimum error rates while requiring relatively small amounts of memory. Our design procedure finds a set of close-opposed pairs of clusters of data. From these pairs the procedure generates a piecewise-linear approximation of the Bayes-optimum decision surface. A window training procedure on each linear segment of the approximation provides great flexibility of design over a wide range of class densities. The data consumed in the training of each segment are restricted to just those data lying near that segment, which makes possible the construction of efficient data bases for the training process. Interactive simplification of the classifier is facilitated by an adjacency matrix and an incidence matrix. The adjacency matrix describes the interrelationships of the linear segments {ℓi}. The incidence matrix describes the interrelationships among the polyhedrons formed by the hyperplanes containing {ℓi}. We exploit switching theory to minimize the decision logic.
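A rough sketch of the core idea (close-opposed cluster pairs generating a piecewise-linear decision surface) follows. The clustering method, pairing rule, and nearest-pair decision rule are simplifications chosen for illustration; the paper's window training and switching-theory minimization are not reproduced.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def close_opposed_segments(X0, X1, k=5):
    """Cluster each class, pair every cluster of class 0 with its nearest cluster of
    class 1, and return the bisecting hyperplane (w, b) of each close-opposed pair."""
    c0, _ = kmeans2(X0, k, minit='++')
    c1, _ = kmeans2(X1, k, minit='++')
    segments = []
    for m0 in c0:
        m1 = c1[np.argmin(np.linalg.norm(c1 - m0, axis=1))]  # nearest opposed cluster
        w = m1 - m0                                          # hyperplane normal
        b = -w.dot((m0 + m1) / 2.0)                          # passes through the midpoint
        segments.append((w, b, m0, m1))
    return segments

def classify(x, segments):
    """Decide with the hyperplane of the close-opposed pair nearest to x."""
    dists = [min(np.linalg.norm(x - m0), np.linalg.norm(x - m1))
             for _, _, m0, m1 in segments]
    w, b, _, _ = segments[int(np.argmin(dists))]
    return int(w.dot(x) + b > 0)    # 0 -> class 0 side, 1 -> class 1 side
```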

91 citations


Journal ArticleDOI
TL;DR: The method of three-dimensional shape analysis using normalized Fourier descriptors is information preserving, yet is as fast as previous suboptimum methods, and it is seen that real-time implementation of the method is feasible.
Abstract: Recent improvements in Fourier descriptor (FD) shape analysis enable rapid identification of three-dimensional objects using FD feature vectors derived from their boundaries. In three-dimensional shape analysis, it is essential to preserve all information to achieve good performance. In the real-time situation it is, of course, equally important to use a computationally efficient method. The method of three-dimensional shape analysis using normalized Fourier descriptors is information preserving, yet is as fast as previous suboptimum methods. In addition, the feature vector has a linear property, allowing interpolation between library projections and effectively defining a continuum of library projections rather than a finite set. This method is applied to the analysis of sequential data varying in resolution and orientation relative to the camera. Computational considerations are discussed, and it is seen that real-time implementation of the method is feasible.
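Because the normalized-FD feature vector is linear in the sense described, intermediate aspect angles can be approximated by interpolating stored library vectors. A minimal sketch of that matching step; the library layout and step count are assumptions.

```python
import numpy as np

def match_against_library(fd_test, library, steps=11):
    """library: ordered list of normalized-FD feature vectors (numpy arrays) for
    adjacent stored projections.  Interpolating each neighbouring pair approximates
    a continuum of library projections; return the best (distance, pair, weight)."""
    best = (np.inf, None, None)
    for i in range(len(library) - 1):
        for alpha in np.linspace(0.0, 1.0, steps):
            fd_interp = (1.0 - alpha) * library[i] + alpha * library[i + 1]
            d = np.linalg.norm(fd_test - fd_interp)
            if d < best[0]:
                best = (d, i, alpha)
    return best
```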

79 citations


Journal ArticleDOI
Youji Fukada
TL;DR: Two clustering procedures for region analysis of image data are described, these algorithms are discussed theoretically, and examples are presented to show how they work on real image data.

44 citations


Patent
27 May 1980
TL;DR: A similarity measure set is defined for a set of fragmentary patterns A(u, m) specified by a common end point m and start points u predetermined relative to the end point m. Recurrence values are calculated according to a recurrence formula for each end point m, rather than for each fragmentary pattern set.
Abstract: A similarity calculator for calculating a set of similarity measures S(A(u, m), Bᶜ) according to the technique of dynamic programming comprises an input pattern buffer for successively producing input pattern feature vectors of an input pattern A to be pattern matched with reference patterns Bᶜ, an m-th input pattern feature vector aₘ at a time. The similarity measure set is for a set of fragmentary patterns A(u, m) defined by a common end point m and start points u predetermined relative to the end point m. Scalar products (aₘ·bⱼⁿ) are calculated between the m-th input pattern feature vector and the reference pattern feature vectors bⱼⁿ of an n-th reference pattern Bⁿ and stored in a scalar product buffer. Recurrence values are calculated according to a recurrence formula for each end point m, rather than for each fragmentary pattern set, and for each reference pattern Bⁿ to provide a similarity measure subset S(A(u, m), Bⁿ), with the recurrence value for each reference pattern feature vector bᵥ calculated by the use of the scalar product (aₘ·bᵥ) and of recurrence values calculated for the previous end point (m−1) and for at least three consecutive reference pattern feature vectors preselected relative to that reference pattern feature vector bᵥ. Instead of the scalar product, it is possible to use any other measure representative of a similarity or a dissimilarity between an input pattern feature vector and a reference pattern feature vector.
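A rough Python sketch of the end-point-wise organization described above: for each new input frame aₘ the scalar products with all reference frames are buffered first, and the recurrence values are then updated from those of the previous end point using a few consecutive reference frames. The start-point bookkeeping, slope constraints, and handling of several reference patterns in the patent are simplified away here.

```python
import numpy as np

def endpointwise_similarity(a_frames, b_frames):
    """a_frames: (M, D) input feature vectors, processed one end point m at a time.
    b_frames: (J, D) feature vectors of a single reference pattern."""
    J = len(b_frames)
    prev = np.full(J, -np.inf)
    prev[0] = 0.0                       # crude initialization at the reference start
    for a_m in a_frames:
        s = b_frames @ a_m              # scalar-product buffer (a_m . b_v) for this end point
        cur = np.empty(J)
        for v in range(J):
            # recurrence over up to three consecutive reference frames at end point m-1
            cur[v] = s[v] + np.max(prev[max(0, v - 2):v + 1])
        prev = cur
    return prev[-1]                     # accumulated similarity at the last reference frame
```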

38 citations


Journal ArticleDOI
TL;DR: In segmenting an image by pixel classification, the sequence of gray levels of the pixel's neighbors can be used as a feature vector that yields classifications at least as good as those obtained using other local properties as features.
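A minimal sketch of the feature described in the TL;DR: each pixel is represented by the raw gray levels of its neighborhood and labeled by a nearest-class-mean rule. The 3 x 3 window and the classifier are illustrative choices, not fixed by the paper.

```python
import numpy as np

def neighborhood_features(img, size=3):
    """Return, for each interior pixel, the gray levels of its size x size
    neighborhood as a feature vector (shape: H' x W' x size*size)."""
    r = size // 2
    H, W = img.shape
    feats = np.stack([img[r + dy:H - r + dy, r + dx:W - r + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)], axis=-1)
    return feats.astype(float)

def segment(img, class_means):
    """Label each interior pixel with the class whose mean neighborhood vector
    (one row per class) is closest in Euclidean distance."""
    F = neighborhood_features(img)
    d = np.linalg.norm(F[..., None, :] - np.asarray(class_means)[None, None], axis=-1)
    return np.argmin(d, axis=-1)
```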

34 citations


PatentDOI
TL;DR: In this speech recognition system, the array formed by a timewise sequence of speech signal feature vectors includes digital data at each time slot representing both presence/absence and consistency of occurrence.
Abstract: In this speech recognition system the array formed by a timewise sequence of speech signal feature vectors includes digital data at each time slot representing both presence/absence and consistency of occurrence.

32 citations


Journal ArticleDOI
TL;DR: It is emphasized that the interactive pattern recognition system ISPAHAN is well suited to finding optimal decision functions based on mappings, and two families of new mapping algorithms are defined.

28 citations


Journal ArticleDOI
01 Jul 1980
TL;DR: The pattern trajectory approach appears to offer advantages in the analysis of complex nonstationary data sets where conventional time series techniques are insufficient, and time-dependent clustering provides a means to identify a composite source model.
Abstract: Multivariate data sets with dependency between observations are described using a feature space representation. The resulting ordered set of points in feature space is termed the pattern trajectory. A set of descriptors of the pattern trajectory has been developed. Time-dependent clusters and transition segments form the basic structural description from which both lower level properties, e.g., cluster position, cluster dispersion, transition rate, and higher level properties, e.g., rebound, periodicity, finite state model, may be derived. Two algorithms have been developed for time-dependent cluster analysis. The time-weighted minimum spanning tree (TWMST) algorithm utilizes a composite space-time distance measure and creates clusters by cutting the longest tree branches. The time-dependent ISODATA (TD-ISODATA) algorithm utilizes a global clustering to initiate the segmentation into time-dependent cluster cores and transition segments. Examples of the application of these algorithms to nonstationary neuronal spike train data and to simulated animal migration data are described. The pattern trajectory approach appears to offer advantages in the analysis of complex nonstationary data sets where conventional time series techniques are insufficient. Time-dependent clustering provides a means to identify a composite source model.
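A compact sketch of the TWMST idea under stated assumptions: the composite distance is taken here as Euclidean feature distance plus a weighted absolute time difference, and clusters are obtained by deleting the longest tree branches. The paper's exact weighting and the TD-ISODATA procedure are not reproduced.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def twmst_clusters(X, t, time_weight=1.0, n_clusters=3):
    """X: (N, D) feature vectors observed at times t (length N).
    Build an MST on a composite space-time distance and cut its longest branches."""
    t = np.asarray(t, dtype=float)
    D = squareform(pdist(X)) + time_weight * np.abs(t[:, None] - t[None, :])
    mst = minimum_spanning_tree(D).toarray()
    edges = np.argwhere(mst > 0)                  # tree branches
    weights = mst[mst > 0]
    if n_clusters > 1:                            # delete the longest branches
        for i, j in edges[np.argsort(weights)[-(n_clusters - 1):]]:
            mst[i, j] = 0.0
    _, labels = connected_components(mst, directed=False)
    return labels                                 # time-dependent cluster label per sample
```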

21 citations


Journal ArticleDOI
TL;DR: An adaptive model for computer recognition of vowel sounds uses the first three formants as features and a single-pattern training procedure for self-supervised learning; the maximum value of a fuzzy membership function is the basis of recognition.

Proceedings ArticleDOI
M. Smith
01 Dec 1980
TL;DR: This approach treats multitarget tracking as an unsupervised pattern recognition problem: once the direction and location of the aggregate of tracks are found in the parameter space, the individual tracks within the group can be identified by proven techniques.
Abstract: This approach to multitarget tracking is from the perspective of an unsupervised pattern recognition problem. In the usual formulation the target data are processed to produce feature points, which may be track points. The feature points that correspond to the same track are identified when mapped as a dense region in the parameter space. Parameter space representation of crossing and nearly coincident tracks forms overlapping clusters that are not separable. This factor makes the transform approach in its usual form unworkable. Nevertheless, the transform technique need not be abandoned, for the diameter of the aggregate of tracks is evident in the parameter space. Once the direction and location of the aggregate of tracks is located, the individual tracks within the group can be identified by proven techniques.

Journal Article
TL;DR: Simple numerical examples for bivariate feature vectors are worked out to demonstrate the approaches to classification on the basis of a probability density and nonlinear decision boundaries.
Abstract: When observed data have to be assigned to one or another category, classification rules are needed. Linear discriminant functions provide easily computed rules; weighting the discriminant function according to the variances in the data sets helps reduce classification errors. Classification on the basis of a probability density involves nonlinear decision boundaries. Simple numerical examples for bivariate feature vectors are worked out to demonstrate these approaches to classification.
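A small worked example in the spirit of the article, with made-up bivariate data (the numbers are illustrative, not taken from the article): a linear discriminant whose weights come from the pooled class covariance, so the function is weighted according to the variances in the data.

```python
import numpy as np

# Illustrative bivariate training data for two classes (not the article's numbers).
rng = np.random.default_rng(0)
class1 = rng.normal([0.0, 0.0], [1.0, 0.5], size=(50, 2))
class2 = rng.normal([2.0, 1.5], [1.0, 0.5], size=(50, 2))

m1, m2 = class1.mean(axis=0), class2.mean(axis=0)
# Pooled (averaged) covariance; its inverse weights the discriminant by the
# variances in the data, which is what reduces classification errors.
S = 0.5 * (np.cov(class1, rowvar=False) + np.cov(class2, rowvar=False))
w = np.linalg.solve(S, m2 - m1)                    # discriminant direction
b = -0.5 * w.dot(m1 + m2)                          # threshold at the midpoint

def classify(x):
    """Linear discriminant: positive score -> class 2, negative -> class 1."""
    return 2 if w.dot(x) + b > 0 else 1

print(classify([0.2, -0.1]), classify([2.1, 1.4]))   # points near each class mean
```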

Patent
03 Jun 1980
TL;DR: A code book storage means and fuzzy vector quantization are used to improve recognition performance by making the code vectors suited to the input voice, together with an HMM storage means that stores HMMs in which the occurrence probability of each label is defined for each state, a means for calculating the occurrence rate of a feature vector group from the HMM using the label occurrence probabilities and the attribution factor vectors, and a code book correcting means 408 which corrects each code vector.
Abstract: PURPOSE:To improve recognition performance by making the code vectors suited to the input voice. CONSTITUTION:This device is provided with a code book storage means 406; a fuzzy vector quantization means 405 which, using the code book, converts each vector of a feature vector group into a vector of attribution factors, one factor per label, thereby converting the feature vector group into an attribution factor vector group; an HMM storage means 407 which stores HMMs in which the occurrence probability of each label is defined for each state; a feature vector group occurrence rate calculating means which calculates the occurrence rate of the feature vector group from the HMM using the label occurrence probabilities and the attribution factor vectors; and a code book correcting means 408 which corrects each code vector.
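A minimal sketch of the fuzzy vector quantization step (means 405): each feature vector is mapped to a vector of attribution factors over the code book rather than a single hard label. The membership formula below is the standard fuzzy-c-means one, used here as an assumption since the abstract does not state the exact formula.

```python
import numpy as np

def fuzzy_vq(feature_vectors, codebook, fuzziness=2.0, eps=1e-12):
    """Convert each feature vector into a vector of attribution factors (one per
    code vector / label) that sum to one, instead of a single hard label."""
    X = np.asarray(feature_vectors, float)          # (T, D)
    C = np.asarray(codebook, float)                 # (K, D)
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + eps   # (T, K)
    u = d ** (-2.0 / (fuzziness - 1.0))
    return u / u.sum(axis=1, keepdims=True)         # membership (attribution) vectors
```

These attribution vectors can then weight the label occurrence probabilities of each HMM state when the occurrence rate of the whole feature vector group is accumulated.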

Journal ArticleDOI
TL;DR: An automated technique is presented which employs the systems identification properties of the digital inverse filter (IF) for the classification and assessment of laryngeal dysfunction.
Abstract: An automated technique is presented which employs the systems identification properties of the digital inverse filter (IF) [8] for the classification and assessment of laryngeal dysfunction. The information is contained in the positions of the IF polynomial zeros in the complex plane as the IF is computed repeatedly over small analysis segments of a speech sample. A graphic display of the z-plane roots and a vector of pattern features of that display result for each case. The vectors are then processed by an automated clustering procedure to classify the cases in the feature space. The results of the analysis of a large test battery of acoustically degraded synthetic vowel sounds using the IF method are presented.
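A sketch of the per-segment computation described: an inverse filter is fitted to each short analysis segment and the z-plane positions of its polynomial zeros are collected as the raw material for the pattern features. Autocorrelation LPC and the window/hop sizes are assumptions; the paper's inverse-filter formulation [8] may differ in detail.

```python
import numpy as np

def inverse_filter_zeros(signal, order=12, frame_len=256, hop=128):
    """For each short analysis segment, fit an order-p inverse filter
    A(z) = 1 - a1 z^-1 - ... - ap z^-p by autocorrelation LPC and return
    the roots of A(z) (the z-plane zeros used as pattern features)."""
    zeros_per_frame = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        x = signal[start:start + frame_len] * np.hamming(frame_len)
        r = np.correlate(x, x, mode='full')[frame_len - 1:frame_len + order]
        # Solve the normal (Yule-Walker) equations R a = r for the predictor a.
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:order + 1])
        zeros_per_frame.append(np.roots(np.concatenate(([1.0], -a))))
    return zeros_per_frame
```

Per-frame features (for example, zero angles and radii near the unit circle) can then be assembled into the display and the pattern feature vector that the clustering procedure consumes.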

Journal ArticleDOI
TL;DR: Pictures of phytoplankton samples were analyzed as raster images by means of a television camera and a Robotron 4200 computer, and each of the five genera involved was identifiable by a characteristic point cluster in a p-dimensional feature space.
Abstract: Pictures of phytoplankton samples were analyzed as raster images by means of a television camera and a Robotron 4200 computer. A feature vector described the objects irrespective of their angle. Each of the five genera involved was identifiable by a characteristic point cluster in a p-dimensional feature space. A learning method was used during development of the classification structure, and the quality of identification was increased incrementally to the greatest possible degree. Asterionella formosa was identified in all cases without error despite the relatively coarse scanning grid. Errors in the identification of Fragilaria crotonensis can be reduced by improving the resolution (over 100 picture elements per colony).

Patent
28 Oct 1980
TL;DR: In this article, the authors proposed to improve the precision of inter-pattern similarity by adding the function of evaluating the variation value of a feature vector to the time normalizing algorithm in the time-normalizing matching method.
Abstract: PURPOSE:To improve the calculation precision of inter-pattern similarity by adding the evaluation of the variation value of a feature vector to the time-normalizing algorithm of the time-normalized matching method. CONSTITUTION:Pattern A is input to and held in input pattern buffer 21, and standard pattern B is stored in advance in standard pattern memory 22. Feature vectors aᵢ and bⱼ of patterns A and B are output in sequence from buffer 21 and memory 22 and input to shift registers 30 and 31 of a similarity calculation part. Dividing circuits 32 and 33 then calculate the variation values between feature vectors aᵢ and aᵢ₋₁ and between bⱼ and bⱼ₋₁ output from registers 30 and 31, and distance calculating circuit 34 calculates the distance between the two variation values. Distance calculating circuit 35, on the other hand, finds the distance between vectors aᵢ and bⱼ, and constant circuit 36 multiplies the output of circuit 35 by a constant. The outputs of circuits 34 and 36 are fed to adding circuit 37 and integral register 38 to compute the similarity d(i, j) between vectors aᵢ and bⱼ, and d(i, j) is supplied to recurrence formula calculation part 24 to obtain an integrated value.
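A sketch of the frame distance being built in circuits 32 through 37, assuming the "variation value" is a frame-to-frame difference (the translated abstract mentions "dividing circuits", so a ratio is also conceivable) and an illustrative weighting constant. It is valid for i, j >= 1.

```python
import numpy as np

def frame_distance(a, b, i, j, c=1.0):
    """d(i, j) combining the ordinary frame distance with the distance between
    the 'variation values' (frame-to-frame differences) of both patterns."""
    d_static = np.linalg.norm(a[i] - b[j])                           # circuit 35
    d_delta = np.linalg.norm((a[i] - a[i - 1]) - (b[j] - b[j - 1]))  # circuits 32-34
    return c * d_static + d_delta                                    # adder 37

# The resulting d(i, j) then enters the usual time-normalizing (DP) recurrence
# to accumulate an overall similarity between patterns A and B.
```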

Patent
22 Feb 1980
TL;DR: The standard pattern most similar to an input pattern is determined through comparative evaluation of similarity criteria calculated between the input pattern and each standard pattern, reducing the volume of calculation while maintaining time normalization performance.
Abstract: PURPOSE:To reduce the volume of calculation while maintaining time normalization performance, by determining the standard pattern most similar to an input pattern through comparative evaluation of the criterion of similarity calculated between the input pattern and each standard pattern. CONSTITUTION:An input pattern expressed as a time series of feature vectors is held in input pattern buffer 11, and a standard pattern expressed as a time series of feature vectors is stored in standard pattern memory part 12. Memory part 12 also stores, classified by category, assignment information on a specific dimension or dimension group. The outputs of buffer 11 and memory part 12 are supplied to time normalization part 13, which determines the time-axis coordinates at which the time variation pattern of the specific dimension or dimension group of the standard pattern matches the time variation pattern of the corresponding dimension of the input pattern. The output of normalization part 13 is applied to nonlinear matching part 15, where the criterion of similarity between the feature vector series of the two patterns is evaluated by comparison, so that decision part 16 can decide on the standard pattern most similar to the input pattern.

Patent
17 Jun 1980
TL;DR: In this paper, the feature vector that the recognition system produces for each character on a document is normalized to unit length and projected onto a set of eigenvectors, each set corresponding to a particular character font.
Abstract: A method and apparatus for dynamically switching between fonts or groups of fonts in an automatic character recognition system (10). The feature vector that the recognition system produces for each character on a document is normalized to unit length (24) and projected onto a set of eigenvectors (28). A memory (26) in the character recognition system stores several sets of eigenvectors, each set corresponding to a particular character font. A switch character which is included between a first and second group of characters triggers the selection of the appropriate set of eigenvectors for the succeeding font.
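A minimal sketch of the normalization-and-projection step. The reconstruction-error rule for choosing among stored eigenvector sets is an illustrative add-on, not something stated in the abstract (the patent switches sets when a switch character is seen).

```python
import numpy as np

def project_onto_font(feature, eigenvectors):
    """Normalize a character's feature vector to unit length and project it onto
    the eigenvector set (rows) stored for one font."""
    f = np.asarray(feature, float)
    f = f / np.linalg.norm(f)            # unit-length normalization
    E = np.asarray(eigenvectors, float)  # (k, D), assumed orthonormal rows
    return E @ f                         # coordinates in that font's eigenspace

def best_font(feature, font_eigenvector_sets):
    """Illustrative add-on (not in the abstract): pick the font whose eigenspace
    reconstructs the normalized feature vector with least error."""
    f = np.asarray(feature, float)
    f = f / np.linalg.norm(f)
    errs = [np.linalg.norm(f - E.T @ (E @ f)) for E in font_eigenvector_sets]
    return int(np.argmin(errs))
```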

Proceedings ArticleDOI
23 Dec 1980
TL;DR: This paper will describe those optical operations which are applicable for conditioning data for the ALN process, and present an example of how this hybrid approach can be used.
Abstract: Up to this point, optical information processing system designers have taken an end-to-end approach to pattern recognition and feature extraction from imagery. The majority of algorithms that have been proposed for this category of problems have used a completely optical solution in which only the signal conditioning and detection have been implemented by other means. It is the purpose of this paper to present an alternate approach in which optical methods are used to provide image features for an adaptive learning network [1] (ALN). The result of the ALN design process is to specify a functional mapping between input feature space and a set of response variables that can be interpreted as indicators of particular processes associated with the input data. This mathematical mapping can then be reduced to hardware and made to perform on real time inputs. In the case of recognizing patterns in the field of view of airborne sensors, certain useful features of the input image such as the spatial frequency content or optical moment information have been discarded as inputs for the ALN due to computational complexity and the packaging constraints of an aerial platform. It is here that the speed, resolution and potential compactness of coherent optical methods can be used to extract features from input images that otherwise would not be feasible. This paper will describe those optical operations which are applicable for conditioning data for the ALN process, and present an example of how this hybrid approach can be used.

Proceedings ArticleDOI
23 Dec 1980
TL;DR: MACHAL is applied to a low order feature matching in a relatively sparse feature space, characteristic of terminal homing problems, and a probability model for the algorithm is developed and its validity tested by Monte Carlo simulation.
Abstract: Target recognition in the terminal homing scenario consists of matching a set of sensed features with a set of reference features in the prestored reference feature map. An efficient feature matching algorithm, MACHAL, is described. This iterative algorithm employs clustering of feature metrics rather than exhaustive correlation calculations between reference and sensed features. Clustering, followed by data thinning, quickly reduces both reference and sensed data sets and thereby reduces the computational burden. MACHAL is a general algorithm which is capable of matching feature vectors of arbitrary dimension. The computational requirements increase with the dimension of the feature space and with the increasing number of feature vectors in the sensed and reference feature sets. In this paper, MACHAL is applied to a low order feature matching in a relatively sparse feature space, characteristic of terminal homing problems. A probability model for the algorithm is developed and its validity tested by Monte Carlo simulation. Upper bounds for the clustering threshold and for the noise variance are developed using the probability model. The performance of the algorithm is evaluated by assessment of match accuracy, and robustness to noise resulting from typical sets of sensed and reference scenes. The application of MACHAL to higher order feature space is demonstrated.

Patent
10 Apr 1980
TL;DR: Character patterns are classified efficiently and with high precision by extracting two complementary features from the character patterns and performing multi-stage collation processing that allows for deformation of characters caused by differences of font; the first collation stage is used only to select candidate categories for finer classification.
Abstract: PURPOSE:To classify and process character patterns efficiently and with high precision by extracting two complementary features included in character patterns and performing multi-stage collation processing that considers deformation of characters caused by differences of font. CONSTITUTION:Input character 1 is read by scanning-type photoelectric converter 2, processed by outside contact frame detection circuit 4, and input to storage circuit 5. From the output of storage circuit 5, the complementary white and black features are extracted by white and black pattern generation circuits 9 and 8 in classification unit 6; they are applied successively to collation circuits 10 and 14 and compared with the contents of feature pattern tables 11 and 12 of the white and black patterns of known characters, and classification result 7 is obtained. Collation circuit 10 compares a small number of feature vectors with one another to select candidate category 13 for finer classification in the next stage, and collation circuit 14 compares all vectors with one another with respect to this category.

Journal ArticleDOI
TL;DR: This study presents pattern recognition experiments on the electroencephalogram using the Bayes classifier and the Mahalanobis classifier, the latter implemented by means of an averaged covariance matrix.
Abstract: This study presents pattern recognition experiments on the electroencephalogram. The components of the feature vector are Parcor coefficients, which give the covariance matrix a simple structure. The Bayes classifier, which is theoretically optimal in minimizing the error rate, is implemented. The Mahalanobis classifier is also used, by means of an averaged covariance matrix. The performance of the classifiers is tested experimentally by computing the error rate.
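A compact sketch of the Mahalanobis classifier with an averaged covariance matrix, applied to Parcor-coefficient feature vectors; equal class priors are assumed.

```python
import numpy as np

class MahalanobisClassifier:
    """Assign a feature vector (e.g., Parcor coefficients of an EEG segment) to the
    class with the smallest Mahalanobis distance, using one covariance matrix
    averaged over all classes."""
    def fit(self, class_samples):
        # class_samples: list of (n_c, D) arrays, one per class
        self.means = [X.mean(axis=0) for X in class_samples]
        avg_cov = np.mean([np.cov(X, rowvar=False) for X in class_samples], axis=0)
        self.inv_cov = np.linalg.inv(avg_cov)
        return self

    def predict(self, x):
        d2 = [(x - m) @ self.inv_cov @ (x - m) for m in self.means]
        return int(np.argmin(d2))
```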

Journal ArticleDOI
TL;DR: A method is developed for choosing the dimensionality of the patterns, with the axes of the feature space taken as the eigenvectors of matrices of the form R₂⁻¹R₁, where R₁ and R₂ are real symmetric matrices.
Abstract: This paper considers the problem of selection of dimensionality and sample size for feature extraction in pattern recognition. In general, the axes of the feature space are selected as the eigenvectors of matrices of the form R₂⁻¹R₁, where R₁ and R₂ are real symmetric matrices. Expressions are derived for obtaining the changes in the eigenvalues and eigenvectors when there are changes of first order of smallness in the matrices R₁ and R₂. Based on this theory, a method is developed for choosing the dimensionality of the patterns. Expressions are also derived for the selection of sample size for estimating the eigenvectors, for two Gaussian-distributed pattern classes with equal means and unequal covariance matrices, and with unequal means and equal covariance matrices.
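Numerically, the feature axes described here are obtained from the generalized symmetric eigenproblem R₁v = λR₂v, which is equivalent to the eigenvectors of R₂⁻¹R₁ when R₂ is positive definite, for example:

```python
import numpy as np
from scipy.linalg import eigh

def feature_axes(R1, R2, dim):
    """Eigenvectors of R2^{-1} R1 for real symmetric R1 and positive definite R2,
    computed via the generalized eigenproblem R1 v = lambda R2 v.  The 'dim' axes
    with the largest eigenvalues are returned as columns."""
    eigvals, eigvecs = eigh(R1, R2)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # largest first
    return eigvals[order[:dim]], eigvecs[:, order[:dim]]
```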