
Showing papers on "Feature vector published in 1970"


Journal ArticleDOI
TL;DR: An approach is developed which can frequently be used to find a nonorthogonal transformation to project the patterns into a feature space of considerably lower dimensionality.
Abstract: It is known that R linearly separable classes of multidimensional pattern vectors can always be represented in a feature space of at most R dimensions. An approach is developed which can frequently be used to find a nonorthogonal transformation to project the patterns into a feature space of considerably lower dimensionality. Examples involving classification of handwritten and printed digits are used to illustrate the technique.
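The paper's own transformation is not reproduced here, but the underlying idea can be sketched: since R linearly separable classes fit in at most R dimensions, projecting each pattern onto the (generally non-orthogonal) subspace spanned by the R class mean vectors already collapses the feature space to R coordinates. A minimal sketch with made-up data (the mean-span basis is an illustrative choice, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three toy classes in 50 dimensions, each clustered tightly around its own mean.
R, dim, n = 3, 50, 20
means = rng.normal(size=(R, dim)) * 5.0
X = np.vstack([m + 0.1 * rng.normal(size=(n, dim)) for m in means])
labels = np.repeat(np.arange(R), n)

# Project every pattern onto the span of the R class means -- a
# non-orthogonal basis in general -- via least squares.
B = means.T                                        # dim x R basis matrix
coeffs, *_ = np.linalg.lstsq(B, X.T, rcond=None)
Z = coeffs.T                                       # each pattern now has R coordinates
print(Z.shape)                                     # (60, 3)

# Nearest-mean classification still works in the reduced space.
means_reduced = np.linalg.lstsq(B, means.T, rcond=None)[0].T
pred = ((Z[:, None, :] - means_reduced[None]) ** 2).sum(-1).argmin(1)
print((pred == labels).mean())                     # 1.0: separability is preserved
```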

28 citations


01 Jun 1970
TL;DR: The nonparametric density estimation technique is shown to produce acceptable results with real data and demonstrates a definite advantage over a parametric procedure when multimodal data are involved.
Abstract: The report investigates two approaches to pattern recognition which utilize information about pattern organization. First, a nonparametric method is developed for estimating the probability density functions associated with the pattern classes. The dispersion of the patterns in the feature space is used in attempting to optimize the estimate. The second approach involves the structural relationships of pattern components, an approach called 'linguistic' because it employs the concepts and methods of formal linguistics. The nonparametric density estimation technique is shown to produce acceptable results with real data and demonstrates a definite advantage over a parametric procedure when multimodal data are involved. Two alternative techniques are investigated for analyzing linguistic descriptions of patterns. Stochastic automata are considered as recognizers of stochastic pattern languages. The other technique is a stochastic generalization of the recently proposed programmed grammar, which is developed as a grammar for pattern description.
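The report's estimator adapts its smoothing to the dispersion of the patterns; the details are not given in the abstract. As a hedged illustration, even a plain fixed-bandwidth Parzen (Gaussian-kernel) estimate shows the claimed advantage over a single-Gaussian parametric fit when the data are multimodal:

```python
import numpy as np

def parzen_density(x, samples, h):
    """Gaussian-kernel (Parzen window) estimate of p(x) from 1-D samples."""
    u = (x - samples) / h
    return np.exp(-0.5 * u**2).sum() / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
# Bimodal data: the case where a single-Gaussian parametric fit breaks down,
# since its single mode lands in the empty valley between the two clusters.
data = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])

p_mode = parzen_density(3.0, data, h=0.5)   # near one of the true modes
p_gap  = parzen_density(0.0, data, h=0.5)   # in the valley between them
print(p_mode > p_gap)                       # True: the estimate is bimodal
```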

11 citations


Journal Article
TL;DR: The results show that CDPC with Bhattacharyya classifier provides a good generalised performance for irregular shapes-based visual description as compared to the other experimental setups.
Abstract: Reduction of feature space of visual descriptors has become important due to the ‘curse of dimensionality’ problem. This paper reports the efficiency and effectiveness of the Compacted Dither Pattern Code (CDPC) combined with the Bhattacharyya classifier over MPEG-7 Dominant Colour Descriptor (DCD). Both the CDPC and DCD syntactic features use a compact feature space for colour representation. The algorithmic comparison between the two is presented in this paper, and demonstrates that there are several competitive advantages of CDPC in feature extraction and classification stages when compared to MPEG-7 DCD. The embedded texel properties, spatial colour arrangements, high compactness, and robust feature representation of CDPC have proven its effectiveness in our experimental study. Visual description experiments were conducted for ten irregular shapes-based visual concepts in videos with three setups namely CDPC with Bhattacharyya classifier, DCD without spatial coherency and DCD with spatial coherency. The visual descriptions were performed with the TRECVID 2007 development key frame dataset. The experimental results are presented in terms of three common performance measures. The results show that CDPC with Bhattacharyya classifier provides a good generalised performance for irregular shapes-based visual description as compared to the other experimental setups.
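The CDPC features themselves are not reproduced here; the sketch below only illustrates the Bhattacharyya classification step, using hypothetical 4-bin colour histograms as stand-ins for the compact colour codes (the class names and bin counts are invented for illustration):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    return -np.log(np.sum(np.sqrt(p * q)))

def classify(feature, class_models):
    """Assign a feature histogram to the class model at minimum distance."""
    return min(class_models,
               key=lambda c: bhattacharyya_distance(feature, class_models[c]))

# Hypothetical 4-bin colour-code histograms (stand-ins for CDPC features).
models = {
    "sky":   np.array([0.7, 0.2, 0.05, 0.05]),
    "grass": np.array([0.05, 0.1, 0.15, 0.7]),
}
query = np.array([0.6, 0.25, 0.1, 0.05])
print(classify(query, models))   # sky
```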

7 citations


Journal ArticleDOI
TL;DR: The results of a demonstration of a simple segmentation algorithm show that speech segmentation as defined is possible by non-human means.
Abstract: A brief argument is presented for the need for automatic speech segmentation both to facilitate automatic speech recognition and for its theoretical linguistic importance. The problem of speech segmentation in the acoustic domain using a digital computer is examined in detail, that is, of determining an acoustic partition in time which has linguistic relevance. This problem is viewed, in more general terms, as that of detecting transitions, in a globally non-stationary process, from one local stationary state to another. Non-stationary analyses are approximated by considering short fixed length time series sections as seen through a window which moves by a fixed increment. Various non-stationary signal representations are explored in order to establish a feature space suitable for applications to segmentation. Spectral representations are generated only as a reference space for comparison of an automatic segmentation procedure with the linguistically determined segmentation of any given speech sample. Temporal representations of the zero crossings of speech signals are explored in detail. In particular the central sample moments of the reciprocal zero crossings as a function of time are used as input to a simple segmentation algorithm. The results of a demonstration of this algorithm show that speech segmentation as defined is possible by non-human means.
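The paper's features are central sample moments of reciprocal zero-crossing intervals; a simpler stand-in, the windowed zero-crossing rate with a fixed jump threshold, is enough to sketch the idea of detecting a transition between locally stationary states (window sizes and the threshold below are illustrative choices):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:]))

def segment(signal, win, hop, jump):
    """Mark a boundary wherever the windowed statistic changes by more than jump."""
    starts = range(0, len(signal) - win + 1, hop)
    zcr = [zero_crossing_rate(signal[s:s + win]) for s in starts]
    return [starts[i] for i in range(1, len(zcr)) if abs(zcr[i] - zcr[i - 1]) > jump]

# Synthetic "speech": a low-frequency segment followed by a high-frequency one.
t = np.arange(8000) / 8000.0
signal = np.concatenate([np.sin(2 * np.pi * 200 * t[:4000]),
                         np.sin(2 * np.pi * 2000 * t[:4000])])
boundaries = segment(signal, win=400, hop=400, jump=0.1)
print(boundaries)   # [4000]: the change point between the two tones
```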

6 citations


Patent
20 Aug 1970
TL;DR: A character recognition system is described that utilizes an image dissector device to first develop an electronic image of a character and then to scan and dissect the character into elemental areas.
Abstract: A character recognition system utilizing an image dissector device to first develop an electronic image of a character and then to scan and to dissect the character into elemental areas. From this scan and dissect operation multiple analog signals are developed which simultaneously correspond to several of the elemental areas. Threshold circuits serve as quantizers and digitize the analog signals into digital signals with discrete levels representing black and white. These digital signals are supplied to logic circuitry which detects geometric features and encodes them as feature vectors. This vector information is stored in a computer memory along with the positional information for locating the vector. The positional information is derived from the scan driving circuitry. Classification and readout is subsequently made. In another embodiment, comparators are used for comparing the contrast between the elemental areas and developing outputs to be supplied to the logic circuitry.
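The patent describes hardware, not software, but the threshold-quantization and feature-encoding steps can be sketched in code; the grid size, threshold, and the particular "vertical stroke" feature below are illustrative assumptions, not the patent's circuitry:

```python
import numpy as np

# Hypothetical analog scan of a "1": an 8x8 grid of elemental-area intensities.
analog = np.zeros((8, 8))
analog[1:7, 4] = 0.9             # a bright vertical stroke in column 4
analog += 0.05                   # background level

# Threshold circuit: quantize analog levels to black (1) / white (0).
binary = (analog > 0.5).astype(int)

# Detect one geometric feature and store it with its positional information,
# mirroring the (feature vector, position) pairs kept in memory.
features = []
for col in range(8):
    if binary[:, col].sum() >= 5:        # a long run of black in this column
        features.append(("vertical_stroke", col))
print(features)   # [('vertical_stroke', 4)]
```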

6 citations


01 Jan 1970
TL;DR: The problem of classifying patterns from two classes is formulated here as a statistical decision problem, and Wald's sequential probability ratio test is used.
Abstract: Experiments involving sequential recognition techniques and feature ordering schemes were performed on 23-feature samples of vowel spectra and 12-feature samples of remotely sensed agricultural crop data. Since each experiment dealt with two pattern classes, Wald's sequential probability ratio test was used. The test was implemented with both fixed and time-varying stopping boundaries. Feature ordering was accomplished by both dispersion analysis and the divergence criterion.

INTRODUCTION. A pattern recognition system consists of a feature extractor and a classifier (see Figure 1). The feature extractor makes measurements of salient characteristics of the input patterns. These are called feature measurements, and based on them the classifier assigns each input pattern to one of the possible pattern classes. We are concerned with those classifiers that are sequential in nature, that is, those that utilize the feature measurements one at a time in performing the classification. The advantages of sequential techniques are realized when the cost of taking feature measurements is high or the speed of classification is important.

TECHNIQUES. The problem of classifying patterns from two classes is formulated here as a statistical decision problem. N feature measurements, denoted by X_1, X_2, ..., X_N, are given for each pattern. The two pattern classes are called ω_1 and ω_2. For each pattern class ω_j, j = 1, 2, it is assumed that the probability density function of the feature vector X, p(X|ω_j), is known. A discriminant function,

D_i(X) = log p(X|ω_i) P(ω_i), i = 1, 2 (1)

is now defined, which can easily be implemented by a Bayes classifier. When D_i(X) > D_j(X), i ≠ j, then X is said to be in class ω_i. When p(X|ω_i), i = 1, 2, is a multivariate Gaussian density function with mean vector M_i and covariance matrix K_i, i.e.,

p(X|ω_i) = [(2π)^(N/2) |K_i|^(1/2)]^(-1) exp[-(1/2)(X - M_i)' K_i^(-1) (X - M_i)], i = 1, 2 (2)

then the above discriminant function yields

D_i(X) = log P(ω_i) - (1/2) log|K_i| - (1/2)(X - M_i)' K_i^(-1) (X - M_i). (3)

This is the discriminant function used, as the samples to be classified are assumed to be Gaussian in nature. In all recognition schemes used in this paper, the training procedure has been to compute M_i and K_i from the first 75 samples of each class. For each sample to be classified, D_1(X) and D_2(X) were computed. If D_1(X) - D_2(X) was positive, the sample was placed in class 1 and, if negative, in class 2. In the above procedure it is necessary to use all N measurements from each pattern to be classified. Quite often this is inconvenient (because of cost or time consumption) and it becomes desirable to use a scheme requiring fewer feature measurements. When there are only two pattern classes to be recognized, Wald's sequential probability ratio test (SPRT) can be applied. Here the feature measurements can be taken one at a time. At the nth stage of the sequential process, that is, after the nth feature measurement is taken, the classifier computes the sequential probability ratio.
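The two procedures described above, the all-at-once Gaussian discriminant decision and Wald's SPRT with fixed stopping boundaries, can be sketched as follows; the one-dimensional class densities, boundary values, and feature values are illustrative assumptions:

```python
import numpy as np

def discriminant(x, mean, cov, prior):
    """Gaussian discriminant: log P(w_i) - (1/2)log|K_i| - (1/2)(X-M_i)' K_i^-1 (X-M_i)."""
    d = np.asarray(x) - mean
    return (np.log(prior) - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * d @ np.linalg.solve(cov, d))

def norm_pdf(x, m, s):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def sprt(features, m1, m2, s1, s2, lower=-2.0, upper=2.0):
    """Wald's SPRT with fixed stopping boundaries, one 1-D feature at a time."""
    llr = 0.0
    for n, x in enumerate(features, start=1):
        llr += np.log(norm_pdf(x, m1, s1)) - np.log(norm_pdf(x, m2, s2))
        if llr >= upper:
            return 1, n          # accept class 1 after n features
        if llr <= lower:
            return 2, n          # accept class 2 after n features
    return (1 if llr > 0 else 2), len(features)   # forced decision at the end

# Bayes decision using the full feature vector at once:
x = np.array([0.9, 1.1])
d1 = discriminant(x, np.array([1.0, 1.0]), np.eye(2), 0.5)
d2 = discriminant(x, np.array([-1.0, -1.0]), np.eye(2), 0.5)
print(1 if d1 > d2 else 2)                           # 1

# Sequential decision, stopping as soon as the evidence is strong:
print(sprt([1.8, 0.2, 1.1], 1.0, -1.0, 1.0, 1.0))    # (1, 1): stopped after one feature
```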

1 citation


10 Dec 1970
TL;DR: An entirely new class of algorithms is obtained by translating the pattern recognition problem into the problem of minimizing a function of several variables and selecting suitable functions, which includes most known algorithms as special cases.
Abstract: The M-class pattern recognition problem is to construct a set of discriminant functions which partition a feature space into M regions, one region per pattern class. Each point in the feature space is a potential pattern and each pattern represents an object. Almost nothing is assumed about the origins of the patterns. Distributions are not associated with the pattern classes. A set of training patterns is to be generalized into a set of discriminant functions which classify the potential patterns. The fundamental algorithms developed here concern the situation where the origin of each training pattern is known. An extension to the unsupervised case is also given. Several new multi-class decision-making algorithms are proposed. An entirely new class of algorithms is obtained by translating the pattern recognition problem into the problem of minimizing a function of several variables and selecting suitable functions. This general formulation includes most known algorithms as special cases. The class of algorithms includes all procedures which approximate discriminant functions by linear combinations of basis functions. Several successful two-class algorithms are extended to the M-class problem.
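One member of this class of algorithms can be sketched: approximate the M discriminant functions by linear combinations of basis functions and obtain the weights by minimizing a function of several variables by gradient descent. The squared-error objective and the linear-plus-constant basis below are illustrative choices, not the paper's specific functions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three classes in 2-D; discriminants h_i(x) = w_i . phi(x), phi(x) = (x1, x2, 1).
M, n = 3, 60
centers = np.array([[0.0, 3.0], [3.0, -2.0], [-3.0, -2.0]])
X = np.vstack([c + 0.5 * rng.normal(size=(n, 2)) for c in centers])
labels = np.repeat(np.arange(M), n)

Phi = np.hstack([X, np.ones((len(X), 1))])   # basis-function expansion
T = np.eye(M)[labels]                        # one-hot targets, one per class
W = np.zeros((Phi.shape[1], M))

# Minimize J(W) = ||Phi W - T||^2 as a function of the 9 weight variables.
for _ in range(2000):
    W -= 5e-4 * Phi.T @ (Phi @ W - T)        # gradient descent step

pred = (Phi @ W).argmax(axis=1)              # region = largest discriminant
print((pred == labels).mean())               # training accuracy, near 1.0 here
```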

1 citation