
Showing papers on "Feature vector published in 1981"


01 Jan 1981
TL;DR: In this paper, an optimal registration method for matching two and three dimensional deformed images has been developed, where the deformation part of the cost function is measured by the strain energy of the deformed image and the mapping obtained by the registration process is optimal with respect to this cost function.
Abstract: Motivated by the need to locate and identify objects in three dimensional CT images, an optimal registration method for matching two and three dimensional deformed images has been developed. This method was used to find optimal mappings between CT images and an atlas image of the same anatomy. Using these mappings, object boundaries from the atlas were superimposed on the CT images. A cost function of the form DEFORMATION-SIMILARITY is associated with each mapping between the two images. The mapping obtained by our registration process is optimal with respect to this cost function. The registration process simulates a model in which one of the images, made from an elastic material, is deformed until it matches the other image. The cross correlation function which measures the similarity between the two images serves as a potential function from which the forces required to deform the image are derived. The deformation part of the cost function is measured by the strain energy of the deformed image. Therefore, the cost function of a mapping is given in this model by the total energy of the elastic image. The optimal mapping is obtained by finding the equilibrium state of the elastic image, which by definition corresponds to a local minimum of the total energy. The equilibrium state is obtained by solving a set of partial differential equations taken from the linear theory of elasticity. These equations are solved iteratively using the finite differences approximation on a grid which describes the mapping. The image function in a spherical region around each grid point is described by its projections on a set of orthogonal functions. The cross correlation function between the image functions in two regions is computed from these projections, which serve as the components of a feature vector associated with the grid points. In each iteration step of the process, the values of the projections are modified according to the currently approximated deformation. The method was tested by registering several two and three dimensional image pairs. It can also be used to obtain the optimal mapping between two regions from a set of corresponding points (with and without error estimates) in these regions.
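As a rough illustration of this elastic-matching idea (a simplified sketch, not the authors' implementation), the Python fragment below iteratively deforms one 2-D image toward another: an image-driven force stands in for the cross-correlation potential, and Gaussian smoothing of the displacement field stands in for the strain-energy penalty. All function and parameter names are illustrative assumptions.

```python
# Minimal sketch of iterative elastic-style registration of two 2-D images.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def register(fixed, moving, n_iter=200, step=0.5, smooth=2.0):
    ny, nx = fixed.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    uy = np.zeros((ny, nx))   # displacement field (rows)
    ux = np.zeros((ny, nx))   # displacement field (cols)
    for _ in range(n_iter):
        # Warp the moving image with the current displacement field.
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
        # Similarity force: image gradient times residual, a crude stand-in
        # for the cross-correlation potential used in the paper.
        gy, gx = np.gradient(warped)
        diff = fixed - warped
        fy, fx = diff * gy, diff * gx
        # Update and regularize: Gaussian smoothing of the displacement field
        # plays the role of the elastic (strain-energy) penalty.
        uy = gaussian_filter(uy + step * fy, smooth)
        ux = gaussian_filter(ux + step * fx, smooth)
    return uy, ux
```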

386 citations


Journal ArticleDOI
TL;DR: Improved techniques for automatic recognition of human face profiles are reported, each face description being a 17-dimensional feature vector whose components were formed from the averages of three sittings.

166 citations


Journal ArticleDOI
TL;DR: This paper presents a method for evaluating the properties of features that describe the shape of a QRS complex by examining the distances in the feature space for a class of nearly similar complexes.
Abstract: Automated classification of ECG patterns is facilitated by careful selection of waveform features. This paper presents a method for evaluating the properties of features that describe the shape of a QRS complex. By examining the distances in the feature space for a class of nearly similar complexes, shape transitions which are poorly described by the feature under investigation can be readily identified. To obtain a continuous range of waveforms, which is required by the method, a mathematical model is used to simulate the QRS complexes.
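A minimal sketch of the evaluation idea, assuming a hypothetical parametric QRS model (the paper's actual simulation model is not reproduced here): generate a continuous range of nearly similar complexes, compute a candidate feature vector for each, and inspect the distances between neighbouring waveforms; spans where the distance collapses or jumps mark shape transitions the feature describes poorly.

```python
import numpy as np

def qrs_model(t, width, notch):
    # Hypothetical QRS model: a main Gaussian deflection plus a small notch.
    return np.exp(-(t / width) ** 2) - notch * np.exp(-((t - 0.04) / 0.01) ** 2)

def features(x, t):
    # Simple candidate shape features: peak amplitude, area, duration above half maximum.
    dt = t[1] - t[0]
    half = x.max() / 2
    return np.array([x.max(), x.sum() * dt, (x > half).sum() * dt])

t = np.linspace(-0.1, 0.1, 400)
params = np.linspace(0.0, 0.5, 50)               # continuous range of notch depths
F = np.array([features(qrs_model(t, 0.02, p), t) for p in params])
step_dist = np.linalg.norm(np.diff(F, axis=0), axis=1)
print(step_dist)                                 # distances between neighbouring shapes
```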

114 citations


Proceedings ArticleDOI
01 Apr 1981
TL;DR: A fast nonlinear time alignment method is presented, which is based on a preprocessing of the normalized speech spectrogram by means of a segmentation of the trace in the spectral feature space, which offers savings in computing time by a factor of 10 or more as compared to conventional dynamic programming.
Abstract: A fast nonlinear time alignment method is presented, which is based on a preprocessing of the normalized speech spectrogram by means of a segmentation of the trace in the spectral feature space. After such trace segmentation the patterns have a fixed format and allow for a subsequent classification with a distance measure which is obtained from conventional dynamic programming with extreme constraints. Since, due to the trace segmentation preprocessing, these extreme constraints can be applied without performance degradation, the described method offers savings in computing time by a factor of 10 or more as compared to conventional dynamic programming. As a side benefit, reference pattern memory savings by a factor of 3 or more are obtained.
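The trace-segmentation preprocessing can be sketched as resampling the sequence of spectral feature vectors at equal arc-length intervals along its trace in feature space, which yields the fixed-format pattern mentioned above. The sketch below assumes a (T, D) array of frames; names and the segment count are illustrative.

```python
import numpy as np

def trace_segmentation(frames, n_segments=16):
    # frames: (T, D) array of spectral feature vectors.
    steps = np.linalg.norm(np.diff(frames, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(steps)])
    targets = np.linspace(0.0, arclen[-1], n_segments)
    # Linear interpolation of each feature dimension along the trace.
    return np.stack([np.interp(targets, arclen, frames[:, d])
                     for d in range(frames.shape[1])], axis=1)
```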

54 citations


PatentDOI
TL;DR: In this paper, a similarity measure is calculated by comparing selected feature vectors from an input speech signal sequence of feature vectors (A) and a sequence (B) of reference vectors selected from a plurality of pre-stored reference sequences.
Abstract: Speaker recognition is decided by a similarity measure (D) calculated from comparing selected feature vectors among an input speech signal sequence of feature vectors (A) and a selected sequence (B) of reference vectors selected from a plurality of pre-stored reference sequences. Prior to comparison of the input and reference vector sequences, the two sequences are time normalized to align corresponding feature vectors. A significant sound specifying signal (V) including a time sequence of elementary signals is generated in synchronism with one of the input and reference sequences and indicates which feature vectors in that one of the input and reference sequences are considered to represent significant sound. The similarity measure (D) is then calculated in accordance with the comparison of those feature vectors in the one sequence which are indicated by the significant sound specifying signal as representing significant sound and the corresponding feature vectors of the other sequence.
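A minimal sketch of the masked similarity measure, assuming the two sequences have already been time-normalized frame by frame; the variable names are not the patent's.

```python
import numpy as np

def masked_similarity(aligned_input, aligned_reference, significant):
    # aligned_input, aligned_reference: (T, D) aligned feature sequences;
    # significant: boolean (T,) flag derived from the significant sound signal V.
    d = np.linalg.norm(aligned_input - aligned_reference, axis=1)
    return d[significant].mean()   # distance D accumulated over significant frames only
```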

29 citations


Patent
20 Apr 1981
TL;DR: In this article, a computer for calculating the similarity between first and second patterns, each represented by a sequence of feature vectors, comprises a calculating circuit for calculating a weighting factor from feature vectors in the first pattern and a weighting circuit for applying the weighting factor to the feature vectors of the second pattern to calculate an estimated vector for the second pattern.
Abstract: A computer for calculating the similarity between first and second patterns, each represented by a sequence of feature vectors, comprises a calculating circuit for calculating a weighting factor from feature vectors in the first pattern, a weighting circuit for applying the weighting factor to the feature vectors of the second pattern to calculate an estimated vector for the second pattern, and a similarity calculating circuit for calculating and determining the similarity between the first and the second patterns on the basis of the feature vector of the first pattern and of the estimated vector of the second pattern.

24 citations


Proceedings Article
24 Aug 1981
TL;DR: A method is presented for matching two scene descriptions, each of which consists of a set of measured feature vectors with estimated uncertainties, which performs a search by sequentially matching features of one scene to those of the other scene.
Abstract: A method is presented for matching two scene descriptions, each of which consists of a set of measured feature vectors with estimated uncertainties. The two scenes differ by a transformation that depends on a few unknown parameters. The method performs a search by sequentially matching features of one scene to those of the other scene, solving for the transformation parameters by means of a generalized least-squares adjustment, computing the probabilities of these matches by means of Bayes theorem, and using these probabilities to prune the search. An example is given using scene descriptions of the Martian surface in which the features are rocks approximated by ellipsoids.

21 citations


Proceedings Article
24 Aug 1981
TL;DR: A system learning concepts from training samples consisting of structured objects based on descriptions invariant under isomorphism is described and it is shown that the algorithm can successfully handle practical problems with samples of about one hundred of relatively complicated structures in a reasonable time.
Abstract: A system learning concepts from training samples consisting of structured objects is described. It is based on descriptions invariant under isomorphism. In order to get a unified mathematical formalism, recent graph theoretic results are used. The structures are transformed into feature vectors and after that a concept learning algorithm developing decision trees is applied, which is an extension of algorithms found in psychological experiments. It corresponds to a general-to-specific depth-first search with reexamination of past events. The generalization ability is demonstrated by means of the blocks world example, and it is shown that the algorithm can successfully handle practical problems with samples of about one hundred relatively complicated structures in a reasonable time. Additionally, the problem of representing and learning context dependent concepts is discussed in the paper.

17 citations


Journal ArticleDOI
TL;DR: In this paper, a method for sonar target recognition is extended to include target echoes from an arbitrary aspect, and features used for target recognition are the sampled frequency responses of the targets.
Abstract: A method for sonar target recognition is extended to include target echoes from an arbitrary aspect. The features used for target recognition are the sampled frequency responses of the targets. Since the feature vector varies strongly with aspect angle, a method for extracting linear combinations of features which tend to be aspect independent is applied. The method was tested on experimental echoes using a variety of recognition parameters. The probability of misrecognition ranged from 0.035 to 0.2.
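One plausible reading of the aspect-independence step, not the paper's exact algorithm, is an LDA-style criterion: find linear combinations of the sampled frequency responses that maximize between-target scatter relative to within-target (across-aspect) scatter. The sketch below is offered under that assumption.

```python
import numpy as np
from scipy.linalg import eigh

def aspect_independent_projection(X, target_labels, n_components=2):
    # X: (N, D) sampled frequency responses at many aspect angles; target_labels: (N,)
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-target (across-aspect) scatter
    Sb = np.zeros_like(Sw)                    # between-target scatter
    for c in np.unique(target_labels):
        Xc = X[target_labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall_mean, mc - overall_mean)
    # Generalized eigenproblem Sb v = lambda Sw v; the largest eigenvalues give
    # directions that are stable across aspect yet separate the targets.
    w, V = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))
    return V[:, ::-1][:, :n_components]
```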

16 citations


Book ChapterDOI
01 Jan 1981
TL;DR: This final chapter considers several fuzzy algorithms that effect partitions of feature space ℝ^p, enabling classification of unlabeled (future) observations, based on the decision functions which characterize the classifier.
Abstract: In this final chapter we consider several fuzzy algorithms that effect partitions of feature space ℝ^p, enabling classification of unlabeled (future) observations, based on the decision functions which characterize the classifier. S25 describes the general problem in terms of a canonical classifier, and briefly discusses Bayesian statistical decision theory. In S26 estimation of the parameters of a mixed multivariate normal distribution via statistical (maximum likelihood) and fuzzy (c-means) methods is illustrated. Both methods generate very similar estimates of the optimal Bayesian classifier. S27 considers the utilization of the prototypical means generated by (A11.1) for characterization of a (single) nearest prototype classifier, and compares its empirical performance to the well-known k-nearest-neighbor family of deterministic classifiers. In S28, an implicit classifier design based on Ruspini’s algorithm is discussed and exemplified.
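A minimal sketch of the nearest-prototype classifier considered in S27: class prototypes (for example, c-means cluster centers) characterize the classes, and an unlabeled observation is assigned the label of its closest prototype. Names are illustrative.

```python
import numpy as np

def nearest_prototype(x, prototypes, labels):
    # prototypes: (c, p) array of prototype vectors in feature space R^p;
    # labels: class label attached to each prototype.
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[np.argmin(d)]
```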

15 citations


Journal Article
TL;DR: An iterative procedure, the so-called power method, for finding a multivariate distribution's eigenvectors and eigenvalues is demonstrated and the projection of feature vectors onto the principal components is shown.
Abstract: The principal components transformation offers an effective method for dimensionality reduction and for the assessment of the mutual dependence of observed variables in a data set. An iterative procedure, the so-called power method, for finding a multivariate distribution's eigenvectors and eigenvalues is demonstrated. The projection of feature vectors onto the principal components is shown.
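A short sketch of the power method as described, with deflation to obtain successive components and the projection of feature vectors onto them; the iteration count and random initialization are arbitrary choices.

```python
import numpy as np

def power_method(C, n_components=2, n_iter=500):
    # C: (p, p) covariance matrix of the feature vectors.
    C = C.copy()
    comps, vals = [], []
    for _ in range(n_components):
        v = np.random.default_rng(0).normal(size=C.shape[0])
        for _ in range(n_iter):
            v = C @ v                     # repeated multiplication converges
            v /= np.linalg.norm(v)        # to the dominant eigenvector
        lam = v @ C @ v
        comps.append(v); vals.append(lam)
        C = C - lam * np.outer(v, v)      # deflate the found component
    return np.array(vals), np.array(comps)

# Projection of feature vectors X (N, p) onto the principal components:
# scores = (X - X.mean(axis=0)) @ comps.T
```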

Book ChapterDOI
01 Jan 1981
TL;DR: The ordinary multivariate Gaussian classification algorithm will be reviewed, but some more recent methods will also be mentioned and practical problems in designing a classifier will be discussed, and the theory described will be illustrated on a relatively large data base of Eurasian earthquakes and explosions.
Abstract: The problem of discriminating between earthquakes and underground nuclear explosions is formulated as a problem in pattern recognition. As such it may be separated into two stages, feature extraction and classification. Various ways of doing feature extraction will be discussed. Among the techniques mentioned will be univariate and multivariate autoregressive representation, Karhunen-Loeve expansions, geophysical parameters and spectral parameters. The ordinary multivariate Gaussian classification algorithm will be reviewed, but some more recent methods will also be mentioned and practical problems in designing a classifier will be discussed. The theory described will be illustrated on a relatively large data base of Eurasian earthquakes and explosions.
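A hedged sketch of the ordinary multivariate Gaussian classifier the chapter reviews, assuming equal priors and feature vectors already extracted (for example, AR coefficients or spectral parameters).

```python
import numpy as np

def train_gaussian(X):
    # X: (N, D) training feature vectors for one class (earthquake or explosion).
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def classify(x, class_params):
    # class_params: dict mapping class name -> (mu, cov); equal priors assumed.
    return max(class_params, key=lambda c: log_likelihood(x, *class_params[c]))
```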

Dissertation
01 Jan 1981
TL;DR: Improvements in recognition accuracy over previously reported results have been obtained by several techniques, including an improved feature set containing local character level information, a weighted metric for classification, the development of more sophisticated preprocessing techniques, and the use of unsupervised learning, which allows the machine to use unknown script input samples to update its "dictionary" as the style of the handwriting varies with time.
Abstract: New techniques and features have been developed for machine recognition of cursive script. Some experimental results have been obtained and are described in this thesis. Cursive script is input to a computer using a Rand type graphics tablet. The cursive script pattern is stored as a chronological sequence of X and Y coordinates. New preprocessing techniques eliminate input noise and employ feature extraction utilizing special features to perform "prerecognition," a phase in which certain characteristics are recognized before the actual recognition phase is entered in order to aid in script orientation and normalization. Preprocessing operations performed include rotation, vertical and horizontal scaling and deskewing along with some gap filling and spot noise elimination. These operations are performed in a "closed-loop" manner in which verification is performed after each operation. The process may be repeated until satisfactory results are obtained. Next, feature extraction is performed and recognition is achieved by the application of a modified k-nearest neighbor (k-NN) rule. Cursive script words are "pre-selected" as most likely class candidates by a "distance to class mean" measure. The k-NN algorithm is applied only to the pre-selected classes. Class locations are established by supervised machine learning, in which identified cursive script samples are presented to the machine for feature extraction analysis. The resulting parameters are stored in a "dictionary" for future reference. For testing purposes a special set of computer selected words is used. These words are chosen from the set of all possible two letter words for difficulty of discrimination (by the recognition algorithms). For selection purposes words are generated by computer linking of handwritten letters using a cubic spline algorithm. Of the 676 possible two letter classes, the twenty that are most densely packed in the feature space (and, hence, most difficult to discriminate between) are obtained by a computer search. These words are then linked to form a dense feature space distribution of 400 four letter classes which is again searched. The twenty most difficult to recognize four letter words thus obtained are used as the initial testing set. Results on 200 samples of the author's handwriting indicate that greater than 90% recognition accuracy is achievable using this "worst case" set of words. Better performance can be expected for virtually all other sets of words. Improvements in recognition accuracy over previously reported results have been obtained by several techniques. These include an improved feature set containing local character level information, a weighted metric for classification, the development of more sophisticated preprocessing techniques, and the use of unsupervised learning, which allows the machine to use unknown script input samples to update its "dictionary" as the style of the handwriting varies with time or with various script authors. From individual handwriting styles it is possible to recognize the script author and utilize this information to more accurately recognize handwriting by providing individual "dictionaries" for each author. Substantial recognition accuracy improvements are expected when the machine is expanded to read at the sentence level, thus allowing contextual information to be used.
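The two-stage classification described above can be sketched as follows: candidate word classes are pre-selected by distance to each class mean, and a plain k-NN vote is then taken only within the pre-selected classes. The weighted metric and dictionary updating are omitted; names are assumptions.

```python
import numpy as np

def classify_script(x, class_means, train_X, train_y, n_preselect=5, k=3):
    # class_means: dict label -> mean feature vector; train_X: (N, D); train_y: (N,)
    labels = np.array(list(class_means.keys()))
    means = np.array([class_means[c] for c in labels])
    # Stage 1: pre-select the closest classes by distance to class mean.
    keep = labels[np.argsort(np.linalg.norm(means - x, axis=1))[:n_preselect]]
    mask = np.isin(train_y, keep)
    # Stage 2: k-NN vote among training samples of the pre-selected classes only.
    d = np.linalg.norm(train_X[mask] - x, axis=1)
    nearest = train_y[mask][np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```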

Proceedings ArticleDOI
07 Dec 1981
TL;DR: An algorithm to classify ships from images generated by an infrared (IR) imaging sensor based on decision-theoretic classification of Moment Invariant Functions (MIFs) shows a good potential for ship screening.
Abstract: An algorithm to classify ships from images generated by an infrared (IR) imaging sensor is described. The algorithm is based on decision-theoretic classification of Moment Invariant Functions (MIFs). The MIFs are computed from two-dimensional gray-level images to form a feature vector uniquely describing the ship. The MIF feature vector is classified by a Distance-Weighted k-Nearest Neighbor (D-W k-NN) decision rule to identify the ship type. A significant advantage of the MIF feature extraction coupled with D-W k-NN classification is the invariance of the classification accuracies to ship/sensor orientation - aspect, depression, roll angles and range. The accuracy observed from a set of simulated IR test images reveals a good potential of the classifier algorithm for ship screening.
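A minimal sketch of the Distance-Weighted k-NN rule applied to precomputed MIF feature vectors; inverse-distance weighting is a common choice and stands in for the paper's exact weighting.

```python
import numpy as np

def dw_knn(x, train_X, train_y, k=5, eps=1e-9):
    # x: query MIF feature vector; train_X: (N, D) training MIFs; train_y: (N,) ship types.
    d = np.linalg.norm(train_X - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)                  # closer neighbours weigh more
    scores = {}
    for label, weight in zip(train_y[idx], w):
        scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get)
```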

Journal ArticleDOI
TL;DR: In a discrete feature space, in which a multinomial distribution function has been assumed to exist, the estimate of the expected classification error based on fuzzy labels can be more accurate than the one based on hard labels.

Proceedings ArticleDOI
01 Apr 1981
TL;DR: A feature extraction technique based on autoregressive (AR) modelling is derived that generates a feature vector whose components are the coefficients of the "best" AR fit to the data.
Abstract: To avoid some of the problems that arise in the pattern recognition of waveforms using conventional feature extractors, we derive a feature extraction technique based on autoregressive (AR) modelling. The technique generates a feature vector whose components are the coefficients of the "best" AR fit to the data. Levinson's algorithm and Akaike's Information Criterion (AIC) are used to help furnish the best coefficients and best AR order respectively.
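The described pipeline can be sketched directly: a Levinson-Durbin recursion fits AR models of increasing order, Akaike's Information Criterion selects the order, and the chosen coefficients become the feature vector. The maximum order below is an arbitrary choice.

```python
import numpy as np

def levinson(r, order):
    # r: autocorrelation sequence r[0..order]; returns AR coefficients and
    # the final prediction error (Levinson-Durbin recursion).
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[1:i + 1][::-1] @ a[:i]) / err
        a[:i + 1] = a[:i + 1] + k * np.concatenate([[0.0], a[:i][::-1]])
        err *= (1.0 - k * k)
    return a[1:], err

def ar_feature_vector(x, max_order=10):
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)
    best = None
    for p in range(1, max_order + 1):
        coeffs, err = levinson(r, p)
        aic = len(x) * np.log(err) + 2 * p     # Akaike's Information Criterion
        if best is None or aic < best[0]:
            best = (aic, coeffs)
    return best[1]                             # "best" AR coefficients as features
```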

Book ChapterDOI
K.S. Fu
01 Jan 1981
TL;DR: In the decision theoretic approach, a set of characteristic measurements, called features, are extracted from the patterns, and the recognition of each pattern is usually made by partitioning the feature space as discussed by the authors.
Abstract: Publisher Summary This chapter presents recent progress in syntactic pattern recognition. The many different mathematical techniques used to solve pattern recognition problems may be grouped into two general approaches, namely, the decision theoretic (or discriminant) approach and the syntactic (or structural) approach. In the decision theoretic approach, a set of characteristic measurements, called features, are extracted from the patterns. Each pattern is represented by a feature vector, and the recognition of each pattern is usually made by partitioning the feature space. In the syntactic approach, each pattern is expressed as a composition of its components, called sub-patterns and pattern primitives. This approach draws an analogy between the structure of patterns and the syntax of a language. The recognition of each pattern is usually made by parsing the pattern structure according to a given set of syntax rules. With the recent development of distance or similarity measures between syntactic patterns and error-correcting parsing procedures, the flexibility of syntactic methods has been greatly expanded. Errors occurring at the lower-level processing of a pattern (segmentation and primitive recognition) could be compensated at the higher level using structural information. Using a distance or similarity measure, nearest-neighbor and k-nearest-neighbor classification rules can be easily applied to syntactic patterns.
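As an illustration of the distance-based flexibility mentioned at the end (not taken from the chapter itself), the sketch below treats each pattern as a string of primitives, uses the Levenshtein edit distance as the similarity measure, and applies a nearest-neighbor rule.

```python
def edit_distance(a, b):
    # Levenshtein distance between two primitive strings.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def nearest_syntactic(pattern, references):
    # references: list of (primitive_string, class_label) pairs.
    return min(references, key=lambda rc: edit_distance(pattern, rc[0]))[1]
```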

Journal ArticleDOI
TL;DR: The relations between the measurable variables, which are the probabilities of detection (PD curves), and the characteristic variables of the recognition system are established analytically.
Abstract: This paper addresses the problem of analyzing biological pattern recognition systems. As no complete analysis is possible due to limited observability, the theoretical part of the paper examines some principles of construction for recognition systems. The relations between measurable and characteristic variables of these systems are described. The results of the study are:
1. Human recognition systems can always be described by a model consisting of an analyzer (FA) and a linear classifier.
2. The linearity of the classifier places no limits on the universal validity of the model. The principle of organization of such a system may be put into effect in many different ways.
3. The analyzer function FA determines the transformation of external patterns into their internal representations. For the experiments described in this paper, FA can be approximated by a filtering operation and a transformation of features (contour line filter).
4. Narrow band filtering (comb filter) in the space frequency domain is inadequate for pattern recognition because noise of different bandwidths and mean frequencies affects sinusoidal gratings differently. This excludes the use of a Fourier analyzer.
5. The relations between the measurable variables, which are the probabilities of detection (PD curves), and the characteristic variables of the recognition system are established analytically.
6. The probability of detection not only depends on signal energy but also on signal structure. This would not be the case in a simple matched filter system.
7. The differing probabilities of error in multiple detection experiments show that the interference is pattern specific and the bandwidth (steepness of the PD curves) is different for the different sets of patterns.
8. The distance between the reference vectors in feature space can be determined from the internal representation of the patterns defined by the model. Through multiple detection experiments it is possible to determine not only the relative distances between the patterns but also their absolute position in feature space.

Journal ArticleDOI
J. Fehlauer, B. Eisenstein
TL;DR: The PSM feature extraction technique is applied to a flaw characterization problem arising from ultrasonic nondestructive testing of materials and a deconvolution procedure is used to enhance pattern class discrimination.
Abstract: This paper focuses on extracting features from time series for pattern recognition. System identification techniques are used to represent the signals by a parameterized system model (PSM), with the parameter vector describing the PSM becoming the feature vector. A deconvolution procedure is used to enhance pattern class discrimination. The advantage of the PSM approach is a reduction of the dimensionality of the feature space, thereby simplifying the classifier design and evaluation. The PSM feature extraction technique is applied to a flaw characterization problem arising from ultrasonic nondestructive testing of materials.

Proceedings ArticleDOI
B.V. Dasarathy
05 Apr 1981
TL;DR: The proposed approach is the development of a set of target range and orientation independent features descriptive of the target geometries underlying the sensed point ensembles, which facilitates clustering of like targets and the corresponding point ensemble in the multidimensional feature space wherein each ensemble is represented by a single point.
Abstract: Recognition of targets characterized by point ensembles, for example, a set of FLIR sensed hot spots or radar detected reflectors, represents the topic of this study. Basic to the proposed approach is the development of a set of target range and orientation independent features descriptive of the target geometries underlying the sensed point ensembles. This facilitates clustering of like targets and the corresponding point ensembles in the multidimensional feature space wherein each ensemble is represented by a single point, thereby leading to clusters of like ensembles. This then permits deployment of traditional pattern recognition tools for identification of unknown targets. Details of the feature set selection process and test implementation results are presented to bring out the scope and potential of the new methodology developed in this study. Prior work in this area has been reported with little or no detail offered openly, the excuse being the obvious defense application potential. The ensuing sections present our approach to this feature selection problem and the experimental evidence acquired which supports the methodology developed here.
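One simple way to obtain range- and orientation-independent features from a point ensemble, offered only as a hedged illustration of the kind of feature set described (the paper's actual features are not given here): pairwise distances are invariant to rotation and translation, and normalizing by the largest distance removes the range/scale dependence, so each ensemble maps to a single point in feature space.

```python
import numpy as np
from scipy.spatial.distance import pdist

def ensemble_features(points, n_bins=8):
    # points: (M, 2) or (M, 3) sensed point ensemble (e.g. FLIR hot spots).
    d = pdist(points)                    # all pairwise distances (rotation/translation invariant)
    d = d / d.max()                      # scale (range) normalization
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0), density=True)
    return hist                          # one feature vector per ensemble
```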

Proceedings ArticleDOI
07 Dec 1981
TL;DR: In this paper, classical pattern recognition technology is applied to automatically classify ships using Forward Looking Infrared (FLIR) images, based on the extraction of features which uniquely describe the classes of ships.
Abstract: The Naval Weapons Center (NWC) is currently developing automatic target classification systems for future surveillance and attack aircraft and missile seekers. Target classification has been identified as a critical operational capability which should be included on new Navy aircraft and missile developments or systems undergoing significant modifications. The objective for the Automatic Classification Infrared Ship Imagery System is to provide the following new capabilities for surveillance and attack aircraft and antiship missiles: near real-time automatic classification of ships in day and night at long standoff ranges with a wide area coverage imaging infrared sensor. The sensor applies classical pattern recognition technology to automatically classify ships using Forward Looking Infrared (FLIR) images. Automatic Classification of Infrared Ship Imagery is based on the extraction of features which uniquely describe the classes of ships. These features are used in conjunction with decision rules which are established during a training phase. Conventional classification techniques require labeled samples of all expected targets, threats and non-threats for this training phase. To overcome the resulting need for the collection of an immense data base, NWC developed a Generalized Classifier which, in the training phase, requires signals only from the targets of interest, such as high value combatant threats. In the testing phase, the signals from the combatants are classified and signals from other ships, which are sufficiently different from the training data, are classified as "other" targets. This technique provides considerable savings in computer processing time, memory requirements, and data collection efforts. Since sufficient IIR images of the appropriate quality and quantity were not available for investigating automatic IIR ship classification, TV images of ship models were used for an initial feasibility demonstration. The initial investigation made use of the experience gained with preprocessing and classifying ROR and ISAR data. For this reason, the most expedient method was to collapse the 2-dimensional TV ship images onto the longitudinal axis by summing the amplitude data in the vertical ship axis. The resulting 128 point 1-dimensional profiles show the silhouette of the ship and bear an obvious similarity with the radar data. Based on that observation, a 128 point Fourier transform was computed and the ten low order squared amplitudes of the complex Fourier coefficients were then used as feature vectors for the Generalized Classifier. In contrast to the radar data, the size of TV or IIR images of ships changes as a function of range. It is therefore necessary to develop feature extraction algorithms which are scale invariant. The central moments, which have scale and rotational invariant properties, were therefore implemented. This method was suggested in 1962 by M. K. Hu (IRE Transactions on Information Theory). Using the moments alone resulted in unsatisfactory classification performance and indicated that edge enhancement was necessary and that the background needed to be rejected. The images were therefore processed with the Sobel nonlinear edge enhancement algorithm, which also has the desirable property that it works for images with low signal-to-noise ratios and poorly defined edges. Satisfactory results were obtained.
In another experiment, the feature vector was composed of the five lower-order invariant moments and the five lower-order FFT coefficient squared magnitudes, excluding the zero frequency coefficient. This paper will describe the data base, the processing and classification techniques, discuss the results, and address the topic of "Processing of Images and Data from Optical Sensors."
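The profile/FFT feature extraction described above can be sketched as follows; Sobel edge enhancement and the invariant moments are omitted, and the resampling to 128 points is an assumption about how profiles of differing length were handled.

```python
import numpy as np

def ship_profile_features(image, n_points=128, n_coeffs=5):
    # Collapse the 2-D image onto the longitudinal axis by summing the vertical axis.
    profile = image.sum(axis=0)
    # Resample to a fixed 128-point profile (assumption), then take the FFT.
    profile = np.interp(np.linspace(0, len(profile) - 1, n_points),
                        np.arange(len(profile)), profile)
    spectrum = np.fft.fft(profile, n=n_points)
    # Low-order squared magnitudes, excluding the zero-frequency coefficient.
    return np.abs(spectrum[1:n_coeffs + 1]) ** 2
```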

Proceedings Article
01 Jan 1981
TL;DR: In this paper, a multichannel spectral operation yielding visual imagery which is enhanced in a specified spectral vicinity, guided by the statistics of training samples, is proposed, where the transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table.
Abstract: The Magnifying Glass Transformation (MGT) technique is proposed, as a multichannel spectral operation yielding visual imagery which is enhanced in a specified spectral vicinity, guided by the statistics of training samples. An application example is that in which the discrimination among spectral neighbors within an interactive display may be increased without altering distant object appearances or overall interpretation. A direct histogram specification technique is applied to the channels within the multispectral image so that a subset of the spectral domain occupies an increased fraction of the domain. The transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table. Finally, the image is transformed.

Book ChapterDOI
01 Jan 1981
TL;DR: The paper surveys hydrologic studies by the speaker and others in which pattern recognition concepts play a prominent role and techniques such as cluster analysis, feature extraction, and non-parametric regression are ingredients of the state-of-the art solutions to these questions.
Abstract: The paper surveys hydrologic studies by the speaker and others in which pattern recognition concepts play a prominent role. Data sets gathered from measurements of riverflow and water table pressure heads motivate relatively delicate statistical questions. Techniques such as cluster analysis, feature extraction, and non-parametric regression are ingredients of the state-of-the art solutions to these questions.

01 Dec 1981
TL;DR: Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described in this paper.
Abstract: Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
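A hedged sketch of the (digital) Fukunaga-Koontz transform for the two-class case: the summed class autocorrelation matrices are whitened, and the eigenvectors of the whitened class-1 matrix give a shared basis whose eigenvalues near 1 favour one class and near 0 the other. This shows the transform itself, not the optical implementation.

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    # X1, X2: (N1, D) and (N2, D) arrays of image feature vectors for the two classes.
    S1 = X1.T @ X1 / len(X1)
    S2 = X2.T @ X2 / len(X2)
    vals, vecs = np.linalg.eigh(S1 + S2)
    W = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12)))   # whitening transform
    lam, U = np.linalg.eigh(W.T @ S1 @ W)
    basis = W @ U            # F-K basis functions (columns); lam near 1 -> class 1, near 0 -> class 2
    return basis, lam
```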

01 Jan 1981
TL;DR: A method of classifying MSS data is proposed which has some advantages in that it does not contain the usual pre-processing constraints, and caters for variation and correlation in the data.
Abstract: A method of classifying MSS data is proposed which has some advantages in that it does not contain the usual pre-processing constraints, and caters for variation and correlation in the data. In addition it is very economic in terms of computer time.

Patent
19 Nov 1981
TL;DR: In this article, the degree of similarity is calculated using a vector series of the input pattern estimated from the relation among the vector series of the standard pattern, to ensure highly efficient matching of patterns.
Abstract: PURPOSE: To ensure highly efficient matching of patterns by calculating the degree of similarity using a vector series of the input pattern estimated from the relation among the vector series of the standard pattern.
CONSTITUTION: For the calculation at stage (i+1), the feature vectors a(i) and a(i+1) of standard pattern A are read out of the storage part 1. Then Γ(i) = a(i+1)/a(i) is obtained at the division part 11, and a multiplication is carried out through the multiplier 13 between Γ(i) and the estimated vector b' obtained at stage (i) to obtain the vector b. The vector b(i) of the input pattern B is read out of the input pattern storage part 2. The vectors b(i) and b are each multiplied by a weight coefficient by the multipliers 10 and 15, and the two vectors are added together through the adder 17 to obtain the estimated vector b' at stage (i+1). The distance between the vectors a(i+1) and b' is calculated at the distance calculating part 20, and the degree of similarity is calculated at the similarity calculating part 21 for stage (i+1). Thus the fluctuation component of the feature vector can be reduced, ensuring highly efficient matching of patterns.
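A rough sketch of the estimation-and-matching loop, with element-wise vector operations and the weight coefficients assumed (the patent abstract does not specify them):

```python
import numpy as np

def match_with_estimation(A, B, w_input=0.5, eps=1e-9):
    # A: (T, D) standard (reference) pattern; B: (T, D) input pattern.
    b_est = B[0].astype(float)
    total = np.linalg.norm(A[0] - b_est)
    for i in range(len(A) - 1):
        gamma = A[i + 1] / (A[i] + eps)          # relation between successive reference vectors
        predicted = gamma * b_est                # propagate the previous estimate forward
        b_est = w_input * B[i + 1] + (1 - w_input) * predicted   # blend with the observed vector
        total += np.linalg.norm(A[i + 1] - b_est)
    return total                                 # smaller distance -> higher similarity
```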

Book ChapterDOI
01 Jan 1981
TL;DR: Gradient descent as a technique for finding the minimum of a loss function J(v) was introduced in Section 2.10 of this treatise on gradient descent.
Abstract: Gradient descent as a technique for finding the minimum of a loss function J(v) was introduced in Section 2.10. Recall that the technique consists of finding the gradient ∇ J(v) and then adjusting the parameter vector v so that the change in v is in the direction of the negative of the gradient.
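A minimal sketch of that update rule; the quadratic loss in the usage example is purely illustrative.

```python
import numpy as np

def gradient_descent(grad_J, v0, lr=0.1, n_iter=100):
    # Adjust the parameter vector v in the direction of the negative gradient of J(v).
    v = np.asarray(v0, dtype=float)
    for _ in range(n_iter):
        v = v - lr * grad_J(v)      # step opposite the gradient
    return v

# Example: J(v) = ||v - [1, 2]||^2, so grad J(v) = 2 (v - [1, 2]); the minimum is at [1, 2].
v_min = gradient_descent(lambda v: 2 * (v - np.array([1.0, 2.0])), [0.0, 0.0])
```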