
Showing papers on "Feature vector published in 1985"


Journal ArticleDOI
TL;DR: This paper proposes and analyzes a waveform coding system, adaptive vector predictive coding (AVPC), in which a low-dimensional vector quantizer is used in an adaptive predictive coding scheme.
Abstract: Vector quantization, in its simplest form, may be regarded as a generalization of PCM (independent quantization of each sample of a waveform) to what might be called "vector PCM," where a block of consecutive samples, a vector, is simultaneously quantized as one unit. In theory, a performance arbitrarily close to the ultimate rate-distortion limit is achievable with waveform vector quantization if the dimension of the vector, k, is large enough. The main obstacle in effectively using vector quantization is complexity. A vector quantizer of dimension k operating at a rate of r bits/sample requires a number of computations on the order of k·2^{kr} and a memory of the same order. However, a low-dimensional vector quantizer (dimensions 4-8) achieves a remarkable improvement over scalar quantization (PCM). Consequently, using the vector quantizer as a building block and combining it with other waveform data compression techniques may lead to the development of a new and powerful class of waveform coding systems. This paper proposes and analyzes a waveform coding system, adaptive vector predictive coding (AVPC), in which a low-dimensional vector quantizer is used in an adaptive predictive coding scheme. In the encoding process, a locally generated prediction of the current input vector is subtracted from the current vector, and the resulting error vector is coded by a vector quantizer. Each frame, consisting of many vectors, is classified into one of m statistical types. This classification determines which of the m fixed predictors and which of the m vector quantizers will be used for encoding the current frame.
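The encode step described above can be sketched in a few lines. The following is a minimal illustration of predictive vector quantization, with a made-up two-dimensional codebook and a fixed prediction standing in for the paper's adaptive predictor (all names and numbers are hypothetical, not from the paper):

```python
# Sketch of an AVPC-style encode step for one input vector (hypothetical data;
# a fixed prediction stands in for the adaptive predictor).

def nearest_codeword(error, codebook):
    """Full-search VQ: return the index of the codeword minimizing squared error."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(error, codebook[i]))

def avpc_encode(x, prediction, codebook):
    """Subtract the prediction, vector-quantize the error, then reconstruct."""
    error = [xi - pi for xi, pi in zip(x, prediction)]
    idx = nearest_codeword(error, codebook)
    reconstruction = [pi + ci for pi, ci in zip(prediction, codebook[idx])]
    return idx, reconstruction

codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]]
idx, rec = avpc_encode([2.1, 1.9], prediction=[1.0, 1.0], codebook=codebook)
print(idx, rec)  # codeword 1 ([1, 1]) best matches the error [1.1, 0.9]
```

Only the index `idx` needs to be transmitted; the decoder repeats the prediction and adds the indexed codeword to obtain the same reconstruction.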

295 citations


Journal ArticleDOI
TL;DR: A statistic for determining the number of different textures in the image is developed and demonstrated and a theory regarding the information processing strategies in human vision motivates the development of a texture feature space.

245 citations


Journal ArticleDOI
TL;DR: The new discriminant analysis with orthonormal coordinate axes of the feature space is proposed, which is more powerful than the traditional one insofar as the discriminatory power and the mean error probability for coordinate axes are concerned.

153 citations


Proceedings ArticleDOI
19 Dec 1985
TL;DR: Measurement of eye spacing has been made by application of the Hough transform technique to detect the instance of a circular shape and of an ellipsoidal shape, which approximate the perimeter of the iris, and both the perimeter of the sclera and the shape of the region below the eyebrows, respectively.
Abstract: Few approaches to automated facial recognition have employed geometric measurement of characteristic features of a human face. Eye spacing measurement has been identified as an important step in achieving this goal. Measurement of spacing has been made by application of the Hough transform technique to detect the instance of a circular shape and of an ellipsoidal shape which approximate the perimeter of the iris and both the perimeter of the sclera and the shape of the region below the eyebrows respectively. Both gradient magnitude and gradient direction were used to handle the noise contaminating the feature space. Results of this application indicate that measurement of the spacing by detection of the iris is the most accurate of these three methods with measurement by detection of the position of the eyebrows the least accurate. However, measurement by detection of the eyebrows' position is the least constrained method. Application of these techniques has led to measurement of a characteristic feature of the human face with sufficient accuracy to merit later inclusion in a full package for automated facial recognition.
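The gradient-directed circle Hough transform the abstract relies on can be sketched as follows; the edge data, radius, and 12-point synthetic "iris" below are invented for illustration. Using the gradient direction, as the paper does, collapses each edge point's vote to a single candidate centre:

```python
import math
from collections import Counter

# Minimal circle-Hough sketch (hypothetical edge data): each edge point votes
# for a centre one radius away along its gradient direction.
def hough_circle_centres(edges, radius):
    """edges: list of (x, y, gradient_angle); returns vote counts per centre."""
    votes = Counter()
    for x, y, theta in edges:
        cx = round(x + radius * math.cos(theta))
        cy = round(y + radius * math.sin(theta))
        votes[(cx, cy)] += 1
    return votes

# Synthetic iris-like circle: centre (10, 10), radius 5, gradients pointing inward.
edges = []
for k in range(12):
    a = 2 * math.pi * k / 12
    x, y = 10 + 5 * math.cos(a), 10 + 5 * math.sin(a)
    edges.append((x, y, a + math.pi))  # gradient points toward the centre
votes = hough_circle_centres(edges, radius=5)
print(votes.most_common(1))  # the true centre (10, 10) collects the most votes
```

In practice the radius is unknown, so the accumulator gains a radius dimension; gradient magnitude can additionally weight each vote to suppress noise, as the paper describes.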

106 citations


Journal ArticleDOI
TL;DR: The rectangular influence graph (RIG) is presented as an extension of the relative neighborhood graph (RNG), and it is shown that the RIG is a superset of the Gabriel graph with respect to any Minkowski metric.

52 citations


Journal ArticleDOI
TL;DR: The focus is on techniques that will enable the location of the fault and involves the analysis of the instantaneous angular velocity of the flywheel, which is computationally more complex than the other approaches.
Abstract: Several studies have been performed to detect faults in engines. Fourier series and autocorrelation-based methods have been shown to be useful for this purpose. However, these and other methods discussed in the literature cannot locate the fault. In this paper, the focus is on techniques that will enable the location of the fault. In general, our approach involves the analysis of the instantaneous angular velocity of the flywheel. Three methods of analysis are presented. The first method depends on the computation of a set of statistical correlations. The second method is based on evaluation of similarity measures. These methods are able to locate faults in several tests that have been performed. The third approach uses pattern recognition methods and involves three stages: data extraction, functional approximation to determine a feature vector, and classification based on a Bayesian approach. This method is computationally more complex than the other approaches. However, on the basis of the experimental results it appears that the third method leads to a lower error rate. Cases involving faults in one and two cylinders are presented.
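The Bayesian classification stage of the third approach might look like the sketch below, assuming (our assumption, not the paper's) independent Gaussian features per class; the class statistics are made-up illustrative numbers:

```python
import math

# Hedged sketch of a Bayes classifier over a feature vector: pick the class
# maximizing log prior + Gaussian log-likelihood (diagonal covariance assumed).
def log_posterior(x, mean, std, prior):
    ll = math.log(prior)
    for xi, m, s in zip(x, mean, std):
        ll += -0.5 * math.log(2 * math.pi * s * s) - (xi - m) ** 2 / (2 * s * s)
    return ll

classes = {
    "healthy":          {"mean": [0.0, 0.0], "std": [1.0, 1.0], "prior": 0.7},
    "fault_cylinder_1": {"mean": [3.0, 2.0], "std": [1.0, 1.0], "prior": 0.3},
}

def classify(x):
    return max(classes, key=lambda c: log_posterior(x, **classes[c]))

print(classify([2.8, 2.1]))  # close to the fault-class mean, so "fault_cylinder_1"
```

Each fault location (cylinder) would get its own class, so the classifier's decision both detects and locates the fault.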

51 citations


PatentDOI
TL;DR: It becomes possible to prepare highly reliable reference pattern vectors in an easy manner from a small number of speech patterns, which makes it possible to achieve an improvement in the speech recognition factor.
Abstract: The learning method of reference pattern vectors for speech recognition in accordance with the present invention, a plurality of speech feature vectors are generated from the time series of speech feature parameter for the input speech pattern, by taking into account knowledge concerning the variation tendencies of the speech patterns, and the learning (preparation) of reference pattern vectors for speech recognition is carried out by the use of these speech feature vectors thus generated. Therefore, it becomes possible to prepare highly reliable reference pattern vectors in an easy manner from a small number of speech patterns, which makes it possible to achieve an improvement in the speech recognition factor. In particular, it becomes possible to plan an easy improvement of the reference pattern vectors by an effective use of a relatively small number of input speech patterns.

41 citations


Patent
10 Oct 1985
TL;DR: A method is provided for forming feature vectors that represent the pixels contained in a pattern to be recognized and in reference patterns, with part of each feature vector defining the aspect ratio of the pattern.
Abstract: A method is provided for forming feature vectors representing the pixels contained in a pattern desired to be recognized, and reference patterns. One part of the feature vector is representative of the pixels contained in the pattern itself, while not requiring a very large feature vector which exactly defines each pixel of the pattern. One embodiment of this invention provides that another part of the feature vector, consisting of one or more bytes of the feature vector, defines the aspect ratio of the pattern. In one embodiment, each byte of the feature vector representing the pixels contained in the character represents the relative ratio of black pixels to total pixels in a specific area of the character; other functions relating input matrix and output feature vector information can be used. In one embodiment of this invention, those areas of the character which are defined by the feature vector together cover the entire character, providing a feature vector describing what might loosely be thought as a "blurred" version of the pattern.
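The "blurred" zoned representation described above is easy to sketch: one value per zone is the ratio of black pixels to total pixels in that zone, followed by an aspect-ratio entry. The 4x4 bitmap and 2x2 zoning below are hypothetical:

```python
# Sketch of a zoned black-pixel-ratio feature vector (toy 4x4 bitmap, 2x2 zones).
def zoned_features(bitmap, zones=2):
    h, w = len(bitmap), len(bitmap[0])
    zh, zw = h // zones, w // zones
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            block = [bitmap[r][c]
                     for r in range(zr * zh, (zr + 1) * zh)
                     for c in range(zc * zw, (zc + 1) * zw)]
            feats.append(sum(block) / len(block))  # black-pixel ratio in the zone
    feats.append(w / h)  # aspect ratio of the pattern
    return feats

bitmap = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1]]
print(zoned_features(bitmap))  # [1.0, 0.0, 0.0, 0.5, 1.0]
```

The vector stays short (zones² + 1 entries) no matter how large the bitmap is, which is exactly the compression the patent is after.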

34 citations


Patent
23 Jul 1985
TL;DR: In this paper, the deviation pattern is defined as the deviation from the reference of repeated utterances of the reference speakers or the specified speaker, which is a measure of the similarity of the input and reference patterns, according to one of several possible distance formulae.
Abstract: A pattern recognition apparatus for recognizing spoken words of a nonspecific speaker or of a specific speaker. A reference pattern composed of a sequence of feature vectors, each composed of n feature parameters, bi, is stored. The reference pattern represents a form of average of said words to be recognized as determined by multiple reference speakers speaking the same words or by the specified speaker speaking said words several times. A deviation pattern composed of a sequence of feature vectors composed of n feature parameters, wi /2, is stored. The deviation pattern is a measure of the deviation from the reference of the repeated utterances of the reference speakers or the specified speaker. An input pattern, representing the utterances of a speaker, is composed of a sequence of feature vectors, each composed of n feature parameters, ai, and is stored. A measure of the similarity of the input and reference patterns is calculated, taking into account the deviation pattern, according to one of several possible distance formulae. Basically, a distance parameter calculated for each corresponding input, reference and deviation parameter is set to zero value if the input parameter is inside the deviation range of the reference parameter, and is otherwise calculated to be a finite value.
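One of the distance formulae described above (zero inside the deviation band, a finite value outside) can be sketched as follows. This is our reading of the claim; the "finite value" here is taken to be the overshoot beyond the band, and all vectors are made up:

```python
# Sketch of a deviation-gated distance: per parameter, distance is zero when
# the input a_i lies within w_i/2 of the reference b_i, else the overshoot.
def deviation_distance(a, b, w):
    d = 0.0
    for ai, bi, wi in zip(a, b, w):
        overshoot = abs(ai - bi) - wi / 2
        if overshoot > 0:  # outside the speaker-variation band
            d += overshoot
    return d

ref = [1.0, 2.0, 3.0]
dev = [0.4, 0.4, 0.4]  # w_i: full deviation widths
print(deviation_distance([1.1, 2.0, 3.0], ref, dev))  # 0.0: inside every band
print(deviation_distance([1.5, 2.0, 3.0], ref, dev))  # ~0.3: 0.5 beyond b_1, minus w_1/2
```

Gating the distance this way makes the matcher insensitive to the natural utterance-to-utterance variation captured in the deviation pattern.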

32 citations


Proceedings ArticleDOI
12 Jun 1985
TL;DR: Diffraction pattern sampling provides a feature space suitable for object classification, orientation and inspection that allows significant dimensionality reduction and can be realized with considerable flexibility, reduced size and improved performance by the use of computer generated holograms.
Abstract: Diffraction pattern sampling provides a feature space suitable for object classification, orientation and inspection. It allows significant dimensionality reduction. These properties are best achieved by the use of specifically-shaped Fourier transform plane detector elements and this can be realized with considerable flexibility, reduced size and improved performance by the use of computer generated holograms.

29 citations


01 Nov 1985
TL;DR: In this article, Fisher's linear discriminant was combined with the Fukunaga-Koontz transform to give a useful technique for reduction of feature space from many to two or three dimensions.
Abstract: This Memorandum describes how Fisher's Linear Discriminant can be combined with the Fukunaga-Koontz transform to give a useful technique for reduction of feature space from many to two or three dimensions. Performance is seen to be superior in general to the Foley-Sammon extension to Fisher's method. The technique is then extended to show how a new radius vector (or pair of radius vectors) can be combined with Fisher's vector to produce a classifier with even more power of discrimination. Illustrations of the technique show that good discrimination can be obtained even if there is considerable overlap of classes in any single projection. Keywords include: Index Terms; Dimensionality reduction, Discriminant vectors, Feature selection, Fisher criterion, Linear transformations, Separability. (Great Britain)
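Fisher's discriminant direction for the two-class case can be sketched in a few lines: w = Sw^{-1}(m1 - m2), where Sw is the pooled within-class scatter. A minimal 2-D version with made-up data (a 2x2 inverse keeps the sketch dependency-free):

```python
# Two-class Fisher Linear Discriminant in 2-D (illustrative data only).
def mean(xs):
    n = len(xs)
    return [sum(x[0] for x in xs) / n, sum(x[1] for x in xs) / n]

def scatter(xs, m):
    """Within-class scatter matrix: sum of outer products of deviations."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x in xs:
        d = [x[0] - m[0], x[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(c1, c2):
    m1, m2 = mean(c1), mean(c2)
    s1, s2 = scatter(c1, m1), scatter(c2, m2)
    sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det], [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m2[0], m1[1] - m2[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

c1 = [[1.0, 1.0], [2.0, 1.5], [1.5, 2.0]]
c2 = [[5.0, 5.0], [6.0, 5.5], [5.5, 6.0]]
w = fisher_direction(c1, c2)
```

Projecting every sample onto `w` separates the two classes; the Memorandum's contribution is combining this direction with Fukunaga-Koontz axes to get two or three discriminative dimensions instead of one.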

Journal ArticleDOI
TL;DR: The binary tree, quadtree, and octree decomposition techniques are reexamined for pattern recognition and shape analysis applications and it has been shown that the quadtree andOctree techniques can be used to find the shape hull of a set of points in space while their n-dimensional generalization can be use for divisive hierarchical clustering.
Abstract: The binary tree, quadtree, and octree decomposition techniques are widely used in computer graphics and image processing problems. Here, the techniques are reexamined for pattern recognition and shape analysis applications. It has been shown that the quadtree and octree techniques can be used to find the shape hull of a set of points in space while their n-dimensional generalization can be used for divisive hierarchical clustering. Similarly, an n-dimensional binary tree decomposition of feature space can be used for efficient pattern classifier design. Illustrative examples are presented to show the usefulness and efficiency of these hierarchical decomposition techniques.
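The quadtree decomposition reexamined above can be sketched directly: a block becomes a leaf when it is homogeneous, otherwise it splits into four quadrants. The power-of-two binary image below is illustrative:

```python
# Sketch of quadtree decomposition of a binary image (side must be a power of 2).
def quadtree(img, x=0, y=0, size=None):
    if size is None:
        size = len(img)
    vals = {img[r][c] for r in range(y, y + size) for c in range(x, x + size)}
    if len(vals) == 1:
        return vals.pop()          # homogeneous leaf: 0 or 1
    h = size // 2
    return [quadtree(img, x, y, h),          # NW quadrant
            quadtree(img, x + h, y, h),      # NE
            quadtree(img, x, y + h, h),      # SW
            quadtree(img, x + h, y + h, h)]  # SE

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
print(quadtree(img))  # [1, 0, 0, 1]: four homogeneous quadrants
```

The 1-leaves at varying depths outline the shape at varying resolution, which is the idea behind using the tree for shape hulls; replacing the four quadrants with 2^n orthants gives the n-dimensional generalization the abstract mentions for clustering.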

Journal ArticleDOI
TL;DR: In this article, a method was devised to accomplish contrast-invariant pattern recognition using multiple circular harmonic components, which is shift and rotation invariant and can detect targets with various contrasts in the presence of high-contrast objects.
Abstract: A method has been devised to accomplish contrast-invariant pattern recognition using multiple circular harmonic components. In addition to detecting targets with various contrasts in the presence of high-contrast objects, the method is shift- and rotation-invariant. A vector f is formed from the autocorrelation values for each member of a set of circular harmonic components corresponding to the target of interest. The unit vector f/‖f‖ is a feature vector whose direction characterizes the target. Target detection is accomplished by comparing the corresponding cross-correlation unit vector to the vector f/‖f‖. Experimental results are shown.

Journal ArticleDOI
01 Jul 1985
TL;DR: A new concept for examining shapes as vectors in a shape space is described, and two theorems essential to the process of comparing partial shapes to the complete shape are stated and proved.
Abstract: A new concept for examining shapes as vectors in a shape space is described. The shape space is defined in terms of its properties and the importance of the independence of the size variable from the shape vectors. Also, two theorems essential to the process of comparing partial shapes to the complete shape are stated and proved. A new method for detecting the points on a shape that appear to dominate visual perception is described. This method, called the adaptive line of sight method, detects the dominant points on a shape even though they do not always occur on points of high curvature. The critical points determined by this method are based on a set of axes that are dependent on the shape itself. Therefore, the points determined are independent of size, rotation, or relative displacement. The line of sight of a point concept is also introduced and subsequently utilized to extract features from a shape. These features are then compared to the features of other shapes by a syntactic procedure for the purpose of recognizing whether a shape is a partial shape or a shape in its own right.

Journal ArticleDOI
TL;DR: A pitch tracking algorithm that uses perceptually motivated features to identify the first peak of each pitch period in the speech waveform during voiced portions of speech and a multi‐variate classifier makes decisions about the location of pitch marks based on the values of the feature measurements.
Abstract: A pitch tracking algorithm was designed that uses perceptually motivated features to identify the first peak of each pitch period in the speech waveform during voiced portions of speech. The feature measurement algorithms were designed to capture all of the information that a person uses to identify pitch marks when looking at a waveform display. A multi-variate classifier makes decisions about the location of pitch marks based on the values of the feature measurements. This classifier was designed using Classgraph, a program that allows the user to examine hand-labeled data and make decision boundaries in the multidimensional feature space. Performance was evaluated by comparing pitch marks generated by the algorithm with hand-labeled pitch marks on a database of speakers each saying a different sentence. Each sentence was hand-labeled by two people. The agreement among labelers was within 1% of the agreement between each labeler and the output of the algorithm. [Supported by NSF and DARPA.]

01 Jan 1985
TL;DR: The smart sensor problem is studied, new concepts are developed, new algorithms for implementing an intelligent enormous matrix inversion are proposed, and research areas for further exploration are discussed.
Abstract: To design lightweight smart sensor systems which are capable of outputting motion-invariant features useful for automatic pattern recognition systems, we must turn to the simultaneous image processing and feature extraction capability of the human visual system (HVS) to enable operation in real time, on a mobile platform, and in a "natural environment." This dissertation studies the smart sensor problem, develops new concepts, supported by simulation, and discusses research areas for further exploration. An n^2 parallel data throughput architecture implemented through a hardwired algotecture which accomplishes, without computation, an equivalent logarithmic coordinate mapping is presented. The algotecture mapping provides, at the sensor level, the ability to change scales and rotations in the input plane to shifts in the algotecture mapped space. The resulting invariant leading edge is shown to possess an intensity preserving property for arbitrary variations of image size. The sensitivity of the algotecture to center mismatch is discussed in terms of the difference between coordinate and functional transformation methods. A mathematical link is established between the lateral subtractive inhibition (SI) and multiple spatial filtering (MSF) mechanisms of the HVS, which coexist and function simultaneously. The feature extraction filter in visual neurophysiology, known as the novelty filter, is identified to be the first feedback term of an iterative expansion of the sensory mapping point spread function. The use of the algotecture space combined with the image plane MSF approach is explored for detection and classification using template crosscorrelation methods of recognition. The concept of using a three spatial frequency band model, based upon HVS physiological and psychophysical data, for an intra-class, an inter-class, and a membership identification classification scheme is introduced.
This concept is extended to represent the feature vector entries for, not only each spatial frequency band, but for each image view angle in the recognition library. A new algorithm for implementing an intelligent enormous matrix inversion is proposed. Such an inverse problem exists in the solution of the negative feedback equation for SI and has a form that lends itself easily to parallel processing. The solution provides for a means to solve the inversion even though one of the partitioned submatrices is singular. Procedures are given to construct a partition tree which is analogous to quadtree partitioning methods in image processing. Finally a simple matrix inversion example is worked out to demonstrate how the algorithm works.

Proceedings ArticleDOI
11 Dec 1985
TL;DR: In this article, an attractive feature space (chord distributions) for pattern recognition is discussed, and extensions to 3D in-plane and out-of-plane distortion-invariant object recognition are presented.
Abstract: An attractive feature space (chord distributions) for pattern recognition is discussed. New advancements presented are: extensions to 3-D in-plane and out-of-plane distortion-invariant object recognition; new techniques to allow estimation of in-plane distortion parameters; and a new technique to achieve class estimation in the presence of multiple distortions. Quantitative results are provided for a ship data base (for out-of-plane distortions) and for an aircraft data base (for in-plane distortions).

Proceedings ArticleDOI
11 Dec 1985
TL;DR: The use of optical Fourier transform and computer generated hologram (CGH) techniques allows a high-dimensionality feature space to be produced in parallel and initial simulation results using a ship image data base are presented.
Abstract: The use of optical Fourier transform and computer generated hologram (CGH) techniques allows a high-dimensionality feature space to be produced in parallel. By the proper coordinate transformation CGH, a position, rotation and shift invariant feature space results. The use of synthetic discriminant functions (SDF) and CGH techniques allows high-dimensionality of optical linear discriminant functions (LDFs) to be produced. These optical LDFs allow high-dimensionality and when designed by SDF techniques, 3-D distortion-invariance results. Initial simulation results using a ship image data base are presented.

Journal ArticleDOI
J. Ackenhusen, Y. H. Oh
TL;DR: A single-chip implementation of Linear Predictive Coding (LPC)-based feature measurement for speech recognition, called the FXDSP, has been developed by programming the AT&T DSP20™ programmable Digital Signal Processor and has been verified by both numerical simulation and system use.
Abstract: A single-chip implementation of Linear Predictive Coding (LPC)-based feature measurement for speech recognition, called the Feature Extracting Digital Signal Processor (FXDSP), has been developed by programming the AT&T DSP20™ programmable Digital Signal Processor (DSP) and has been verified by both numerical simulation and system use. For identical input, the recognition distance between floating point simulation and the DSP implementation was found to be negligibly small when compared with distances for word matches. The feature-measurement technique is identical to that used in numerical simulations of LPC-based isolated- and connected-word recognition using combinations of dynamic time warping, vector quantization, and hidden Markov modeling. As a result, the FXDSP represents a single-chip common building block for real-time implementation of most speech recognition techniques under investigation at AT&T Bell Laboratories. The FXDSP performs eighth-order LPC analysis on speech received from a standard CODEC. In every frame period (15 ms) it produces a feature vector consisting of the log energy, nine amplitude-normalized autocorrelation coefficients, and nine LPC-based test-pattern coefficients. The feature-measurement program requires 1023 locations of the 1024 available in on-chip program ROM, 211 of 256 available RAM locations, and 75 percent of available real time.
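The per-frame measurements named above (log energy plus amplitude-normalized autocorrelation coefficients) can be sketched as follows. The LPC recursion itself is omitted, the base-10 log is our assumption, and the sinusoidal test frame is invented:

```python
import math

# Sketch of LPC-style front-end frame features: log energy plus the
# amplitude-normalized autocorrelation coefficients R(k)/R(0), k = 1..order.
def frame_features(frame, order=9):
    r = [sum(frame[n] * frame[n - k] for n in range(k, len(frame)))
         for k in range(order + 1)]
    log_energy = math.log10(r[0]) if r[0] > 0 else float("-inf")
    normalized = [rk / r[0] for rk in r[1:]]  # amplitude-normalized
    return [log_energy] + normalized

# A 100-sample sinusoid at 1/20 cycle per sample; its normalized
# autocorrelation at lag k is approximately cos(2*pi*k/20).
frame = [math.sin(2 * math.pi * n / 20) for n in range(100)]
feats = frame_features(frame)
print(len(feats))  # 10 values: log energy + nine normalized coefficients
```

On the FXDSP this computation runs once per 15-ms frame; normalizing by R(0) is what makes the coefficients amplitude-invariant.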

Proceedings ArticleDOI
J. Ackenhusen, Y. H. Oh
01 Apr 1985
TL;DR: A single-chip implementation of Linear Predictive Coding (LPC)-based feature measurement for speech recognition, called the FXDSP, has been developed by programming the AT&T DSP20™ programmable Digital Signal Processor and has been verified by both numerical simulation and system use.
Abstract: A single chip implementation of LPC-based feature measurement has been developed using the AT&T Bell Laboratories Digital Signal Processor and has been verified by both numerical simulation and system use. The feature measurement circuit, called the FXDSP, performs eighth-order LPC analysis continuously in real time. It receives mu-law-encoded telephone bandwidth speech at a 6.667 kHz sampling rate from a standard CODEC and produces a feature vector consisting of the log energy, nine amplitude-normalized autocorrelation coefficients, and nine LPC-based test pattern coefficients for each analysis frame of speech. Feature vectors are output continuously at a frame period of 15 msec. The feature measurement program requires 1023 locations of the 1024 available in on-chip program ROM, 211 of 256 available RAM locations, and 75% of available real time. The output of the FXDSP has been compared to a floating point FORTRAN program calculating on identical speech waveforms. An LPC-based log likelihood distance between floating point simulation and FXDSP implementation was found to be negligibly small (average distance of 0.021) when compared with distances for word matches in speech recognition (average distance of 0.45).

Journal ArticleDOI
01 Jan 1985
TL;DR: A new representation concept, named the teaching space approach, for the pattern classification training theory is proposed as an alternative to the feature space and the weight space approach used in the contemporary pattern classification theory.
Abstract: A new representation concept, named the teaching space approach, for the pattern classification training theory is proposed as an alternative to the feature space and the weight space approaches used in contemporary pattern classification theory. The concept is introduced formally by means of a representation theorem. A model of the training process is given by the theorem that makes transparent the essential factors of pattern classification training. This result is significant in the development of a theory of teaching systems, which is relevant to areas such as pattern recognition, neural networks, associative memories, robot training, and human training.

Proceedings ArticleDOI
R. Oka1
01 Apr 1985
TL;DR: A new model called Vector Field Model is proposed for providing new algorithms of both segmentation and feature extraction in order to recognize phonemic units in continuous speech spoken by many speakers.
Abstract: A new model called the Vector Field Model is proposed for providing new algorithms for both segmentation and feature extraction in order to recognize phonemic units in continuous speech spoken by many speakers. The original vector field is obtained by differentiating a time-frequency pattern (the output of band-pass filters). In order to extract the steady, increasing-transient, or decreasing-transient feature of a point on the time-frequency pattern, three auxiliary vector fields are created by characterizing coherent orientations of vectors. The crowded vectors in an arbitrary auxiliary vector field produce a pseudo-phonemic segment. Recognition of /VCV/ is carried out by applying so-called Continuous Dynamic Programming to a segment sequence pattern.

Patent
05 Jun 1985
TL;DR: In this article, a method for automatic detection of speech signals in the presence of noise including noise events occurring when speech is not present and having signals whose signal strengths are substantially equal to or greater than the speech signals is presented.
Abstract: An apparatus and method for automatic detection of speech signals in the presence of noise including noise events occurring when speech is not present and having signals whose signal strengths are substantially equal to or greater than the speech signals. Frames of data representing digitized output signals from a plurality of frequency filters are operated on by a linear feature vector to create a scalar feature for each frame which is indicative of whether the frame is to be associated with speech signals or noise event signals. The scalar features are compared with a detection threshold value which is created and updated from a plurality of previously stored scalar features. A plurality of the results of the comparison for a succession of frames is stored and the stored results combined in a predetermined way to obtain an indication of when speech signals are present. In automatic speech recognizers employing the above-described speech detections, when such indication is given, frames are further preprocessed and then compared with stored templates in accordance with the dynamic programming algorithm in order to recognize which word was spoken.
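The detection flow described above might be sketched as follows. The linear feature vector, the threshold update, and the "predetermined way" of combining recent decisions (here: all of the last few frames over threshold) are all hypothetical instantiations, not the patent's actual values:

```python
from collections import deque

# Sketch of scalar-feature speech detection with an adaptive threshold.
FEATURE = [1.0, 0.5, -0.5]  # hypothetical linear feature vector

def scalar_feature(filter_outputs):
    """Collapse one frame of filter-bank outputs to a scalar."""
    return sum(f * x for f, x in zip(FEATURE, filter_outputs))

def detect(frames, history=5, margin=1.0, vote=3):
    recent, decisions, results = deque(maxlen=history), deque(maxlen=vote), []
    for fr in frames:
        s = scalar_feature(fr)
        # Threshold tracks the mean of recently stored scalar features.
        threshold = (sum(recent) / len(recent) + margin) if recent else margin
        decisions.append(s > threshold)
        results.append(sum(decisions) == vote)  # all recent frames over threshold
        recent.append(s)
    return results

noise = [[0.1, 0.1, 0.1]] * 6
speech = [[5.0, 1.0, 0.2]] * 4
print(detect(noise + speech))  # flips to True only after several speech frames
```

Requiring several consecutive over-threshold frames is what rejects short, loud noise events whose strength rivals the speech itself.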

Proceedings ArticleDOI
TL;DR: A number of artificial intelligence techniques which allow symbolic information to be exploited in conjunction with numerical data to improve object classification performance are described.
Abstract: Image processing technology concentrates on the development of data extraction techniques applied toward the statistical classification of visual imagery. In classical image processing systems, an image is [1] preprocessed to remove noise, [2] segmented to produce close object boundaries, [3] analyzed to extract a representative feature vector, and [4] compared to ideal object feature vectors by a classifier to determine the nearest object classification and its associated confidence level. This type of processing attempts to formulate a two-dimensional interpretation of three-dimensional scenes using local statistical analysis, an entirely numerical process. Symbolic information dealing with contextual relationships, object attributes, and physical constraints is ignored in such an approach. This paper describes a number of artificial intelligence techniques which allow symbolic information to be exploited in conjunction with numerical data to improve object classification performance.

Journal ArticleDOI
TL;DR: A hierarchical classification technique based on CART (Classification and Regression Trees) and its application to the task of speech analysis is described and it is believed that this technique provides an intuitive mechanism for quantifying the acoustic cues of phonetic contrasts.
Abstract: In this paper we describe a hierarchical classification technique based on CART (Classification and Regression Trees, see Breiman et al., 1984) and its application to the task of speech analysis. Our investigation is motivated by two reasons. First, we believe that this technique provides an intuitive mechanism for quantifying the acoustic cues of phonetic contrasts. Second, the technique can potentially help us develop classifiers that are useful for automatic speech recognition. Towards these goals, we have added a number of features to the basic CART algorithm, and have expanded it into an exploratory data analysis tool. For example, CART uses a predetermined criterion for partitioning the feature space. We have added the capability for users to manually perform this partitioning. In addition, we have implemented a number of statistical functions for univariate and multivariate data analysis, and added graphical facilities for viewing data in different ways. Most importantly, we have integrated CART with SPIRE, our primary acoustic‐phonetic analysis tool. Examples of how CART can be used for speech analysis and classifier design will be presented. Comparisons with other classification procedures will also be included. [Work supported by AT&T Bell Laboratories Cooperative Research Fellowship and by the Office of Naval Research under contract N00014‐82‐K‐0727.]
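The core CART step (choosing the single-feature threshold that minimizes the weighted Gini impurity of the two resulting partitions) can be sketched as below; the recursion, pruning, and SPIRE integration are beyond this sketch, and the "acoustic" samples are made up:

```python
# Sketch of a CART-style best split by Gini impurity (one feature at a time).
def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(samples, labels, feature):
    """Try midpoints between sorted feature values; return (threshold, impurity)."""
    values = sorted({s[feature] for s in samples})
    best = (None, float("inf"))
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2
        left = [l for s, l in zip(samples, labels) if s[feature] <= t]
        right = [l for s, l in zip(samples, labels) if s[feature] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Two phonetic classes separable on feature 0 (invented measurements).
samples = [[0.2, 5.0], [0.3, 4.0], [0.8, 5.1], [0.9, 3.9]]
labels = ["s", "s", "sh", "sh"]
print(best_split(samples, labels, feature=0))  # threshold near 0.55: a pure split
```

The paper's manual-partitioning extension amounts to letting the analyst override this automatically chosen threshold, which is what makes the tree readable as a set of acoustic cues.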

Journal ArticleDOI
TL;DR: An empirical method for the experimental assigning of specimens to a set of standard specimens was developed especially for stochastic scenes and provides information on the separability of the standard specimen and the significance of assignment.
Abstract: SUMMARY This paper describes an empirical method for the experimental assigning of specimens to a set of standard specimens. The procedure was developed especially for stochastic scenes. Typical applications of such a procedure are found in material sciences in assigning specimens to a standard series. The assignment is based on feature vectors obtained from analyses of the specimen texture by means of erosion or opening. In cases where the arrangement of particles is characteristic of the specimen, dilation or closing are used to obtain a feature vector. To increase the sensitivity of the method described the use of more than one structuring element is recommended in the analysis. Besides the classification of an unknown specimen the procedure provides information on the separability of the standard specimen and the significance of assignment.
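An erosion-based feature vector of the kind described above can be sketched as follows: erode with square structuring elements of growing size and record the fraction of surviving pixels (coarse texture survives larger erosions). The toy binary image and square elements are our illustrative choices:

```python
# Sketch of morphological texture features from repeated binary erosion.
def erode(img, k):
    """Binary erosion with a k x k square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            if all(img[r + i][c + j] for i in range(k) for j in range(k)):
                out[r][c] = 1
    return out

def erosion_features(img, sizes=(1, 2, 3)):
    """Fraction of foreground pixels surviving each erosion size."""
    total = sum(map(sum, img))
    return [sum(map(sum, erode(img, k))) / total for k in sizes]

img = [[1, 1, 1, 0],
       [1, 1, 1, 0],
       [1, 1, 1, 0],
       [0, 0, 0, 0]]
print(erosion_features(img))  # [1.0, ...]: the 3x3 blob survives up to k = 3
```

Classification then reduces to comparing an unknown specimen's vector against the standard specimens' vectors; using several structuring elements, as the paper recommends, simply concatenates more such entries.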

Book ChapterDOI
01 Jan 1985
TL;DR: This chapter deals with the problem of 'reconstructing' the unknown sequence {x(i)}, i = 1, 2, ... of feature vectors in the (LRF) model from the input and output observations, a problem known as Kalman filtering.
Abstract: In this chapter we deal with the problem of 'reconstructing' the unknown sequence {x(i)}, i = 1, 2, ... of feature vectors in the (LRF) model from the input and output observations, which is known as Kalman filtering.

Journal ArticleDOI
TL;DR: The use of identification and pattern recognition tools for the surveillance and diagnosis of the vibrating behaviour of nuclear reactor components are described and some results are shown.

Proceedings ArticleDOI
11 Dec 1985
TL;DR: Simulation results for crosscorrelation template matching in the algotecture space, as opposed to standard rectilinear coordinate space, support the claim that the algotecture mapping is less sensitive to centroid mismatch, and a sliding window differencing similarity measure is proposed.
Abstract: The exponential non-uniform to uniform hardwired spatial coordinate transformation inherently imbeds an algorithm in the hardware architecture and has thus been called an algotecture. It has been suggested that the algotecture described may be more sensitive to centroid pointing errors than conventional cartesian grid structures. Simulation results for crosscorrelation template matching in the algotecture space, as opposed to standard rectilinear coordinate space, are presented for the case of annuli images with various centroid mismatch. These simulations support the claim that the algotecture mapping is less sensitive to centroid mismatch. The use of template matching on an algotecture mapped grey scale image shows the feasibility of using this technique on more complex images. Since crosscorrelation is a relatively time-consuming operation, a sliding window differencing similarity measure is proposed to accomplish fast detection in the algotecture mapped space directly at the sensor level. Coupling this idea with the classification of objects via the formation of orthogonal feature vectors contained in separate spatial frequency channels which are constrained by human visual system physiological data provides a fast method of object classification which exploits the fact that different features occur in different spatial frequency bands. Finally, the use of a three spatial frequency bandpass feature extraction filter system useful for an intra-class, inter-class, and membership identification classification scheme is discussed.

Journal ArticleDOI
TL;DR: A label space is defined as a space to which the reference points of a feature space can be mapped, and the measurement of similarity in the space of linear prediction features can benefit from this mapping.