
Showing papers on "Feature extraction published in 1982"


Journal ArticleDOI
TL;DR: A new technique for matching image features to maps or models forms all possible pairs of image and model features that match on the basis of local evidence alone, and is robust with respect to changes of image orientation and content.
Abstract: A new technique is presented for matching image features to maps or models. The technique forms all possible pairs of image features and model features which match on the basis of local evidence alone. For each possible pair of matching features the parameters of an RST (rotation, scaling, and translation) transformation are derived. Clustering in the space of all possible RST parameter sets reveals a good global transformation which matches many image features to many model features. Results with a variety of data sets are presented which demonstrate that the technique does not require sophisticated feature detection and is robust with respect to changes of image orientation and content. Examples in both cartography and object detection are given.

304 citations
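The clustering step can be illustrated with a short sketch. The Python below is a hypothetical rendering, not the authors' implementation: it assumes each feature carries a position, orientation, scale, and type label (the "local evidence"), so a single image/model pairing fixes a full RST hypothesis, and the accumulator bin sizes are arbitrary illustrative choices.

```python
"""Hypothetical sketch of feature matching by clustering in RST parameter space.

Assumes each feature carries position, orientation, scale and a type label
(the 'local evidence'), so one image/model pairing fixes a full rotation,
scaling and translation hypothesis.  Bin sizes are illustrative choices.
"""
from collections import defaultdict
from dataclasses import dataclass
import math

@dataclass
class Feature:
    x: float
    y: float
    angle: float        # orientation in radians
    scale: float
    kind: str = ""      # feature type used for the local-evidence check

def rst_from_pair(model: Feature, image: Feature):
    """RST transform mapping the model feature onto the image feature."""
    theta = image.angle - model.angle
    s = image.scale / model.scale
    c, si = math.cos(theta), math.sin(theta)
    tx = image.x - s * (c * model.x - si * model.y)
    ty = image.y - s * (si * model.x + c * model.y)
    return theta, s, tx, ty

def cluster_rst(model_feats, image_feats,
                angle_bin=math.radians(5), scale_bin=0.1, trans_bin=10.0):
    """Vote every locally compatible pairing into a coarse accumulator
    and return the averaged parameters of the most populated bin."""
    accumulator = defaultdict(list)
    for m in model_feats:
        for i in image_feats:
            if m.kind != i.kind:            # pair only on local evidence
                continue
            theta, s, tx, ty = rst_from_pair(m, i)
            key = (round(theta / angle_bin), round(math.log(s) / scale_bin),
                   round(tx / trans_bin), round(ty / trans_bin))
            accumulator[key].append((m, i))
    matches = max(accumulator.values(), key=len)
    params = [rst_from_pair(m, i) for m, i in matches]
    return [sum(p) / len(p) for p in zip(*params)], matches
```

The averaged parameters of the most populated bin play the role of the "good global transformation" found by clustering.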


Journal ArticleDOI
01 Jun 1982

247 citations


Journal ArticleDOI
TL;DR: A computer method is proposed for the extraction of blood vessels from the retinal background; the recognition of arteries and veins; the detection and analysis of peculiar regions such as hemorrhages, exudates, optic discs and arterio-venous crossings.

195 citations


Journal ArticleDOI
TL;DR: The efficiency of the feature vector is demonstrated through experimental results obtained with some natural texture data and a simpler quadratic mean classifier.

114 citations


Journal ArticleDOI
TL;DR: The present research argued that the use of feature extraction processes might be a function of, first, a subject's familiarity with the symbols and, second, the number of symbols from which a presented symbol is sampled; the findings support generalizing a feature extraction interpretation to varying numbers of novel symbols of varying familiarity.
Abstract: Previous research has shown that the identification of rotated alphanumeric symbols seems to be performed via the extraction of critical features encoded invariant to the symbol's orientation. The present research argued that the use of such feature extraction processes might be a function of, first, a subject's familiarity with the symbols and, second, the number of symbols from which a presented symbol is sampled. Earlier research has used highly overlearned alphanumerics in sets of six symbols, a practice that is argued here to be conducive to feature extraction. In two experiments, the generality of a feature extraction interpretation, in contrast to one of mental rotation, was tested by having subjects previously trained to relatively high- vs. low-familiarity criteria identify novel symbols in conditions in which a presented symbol was 1 of either 5 or 20 possibilities. Identification response times were found to be constant across all nonstandard orientations of presented symbols, irrespective of symbol familiarity or symbol set size. The findings support the generalization of a feature extraction interpretation to varying numbers of novel symbols of varying familiarity.

80 citations


Journal ArticleDOI
TL;DR: Using multitemporal multispectral data acquired by Landsat satellites and a physical model describing crop spectral behavior over time, new crop-specific features have been derived; this feature space is two-dimensional irrespective of the number of Landsat observations.

67 citations


Proceedings ArticleDOI
01 May 1982
TL;DR: Although the constraints on this pilot study necessarily precluded feature ordering and selection, the application of the decision function to the evaluation subset resulted in an overall 84% classification accuracy.
Abstract: The feasibility of a new approach to automatic language identification is examined in this pilot study. The procedure involves the application of pattern analysis techniques to features extracted from the speech signal. The database of the extracted features for five speakers from each of eight languages was divided into a learning subset and an evaluation subset. A potential function was then generated for all features in the learning subset. The complexity of the decision function was systematically increased until all members within the learning subset could be separated into the properly identified languages. Although the constraints on this pilot study necessarily precluded feature ordering and selection, the application of the decision function to the evaluation subset resulted in an overall 84% classification accuracy.

54 citations
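One way to read the "potential function" decision procedure is as a kernel classifier whose complexity grows until the learning subset is separated. The sketch below uses a Gaussian kernel and a perceptron-style update as stand-ins; the paper's actual potential function and stopping rule are not specified here, so these are assumptions for illustration only.

```python
"""Illustrative potential-function classifier (kernel perceptron stand-in).

The kernel, the update rule and the stopping criterion are assumptions: the
decision function's complexity grows (more nonzero dual coefficients) until
every member of the learning subset is classified correctly.
"""
import numpy as np

def gaussian_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_potential_function(X, y, kernel=gaussian_kernel, max_epochs=100):
    """Kernel perceptron on labels y in {-1, +1}; returns dual coefficients."""
    alpha = np.zeros(len(X))
    for _ in range(max_epochs):
        errors = 0
        for i, x in enumerate(X):
            f = sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))
            if y[i] * f <= 0:       # misclassified: raise the potential at x_i
                alpha[i] += 1.0
                errors += 1
        if errors == 0:             # learning subset fully separated -> stop
            break
    return alpha

def classify(x, X, y, alpha, kernel=gaussian_kernel):
    f = sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))
    return 1 if f >= 0 else -1
```

For the eight-language task, one such binary decision function per language (one-versus-rest) would be a natural extension; the sketch is written for a two-class problem with labels in {-1, +1}.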


Book ChapterDOI
TL;DR: This chapter presents an overview of Optical Character Recognition for statisticians interested in extending their endeavors from the traditional realm of pattern classification to the many other alluring aspects of OCR.
Abstract: Publisher Summary This chapter presents an overview of Optical Character Recognition (OCR) for statisticians interested in extending their endeavors from the traditional realm of pattern classification to the many other alluring aspects of OCR. The most important dimensions of data entry from the point of view of a project manager considering the acquisition of an OCR system are described. The major applications are categorized according to the type of data to be converted to computer-readable form, and optical scanners are described. The preprocessing necessary before the actual character classification can take place is discussed. The chapter outlines the classical decision-theoretic formulation of the character classification problem. The various statistical approximations to the optimal classifier, including dimensionality reduction, feature extraction, and feature selection, are discussed with references to the appropriate statistical techniques. The importance of accurate estimation of the error and reject rates is discussed, and a fundamental relation between the error rate and the reject rate in optimal systems is described.

44 citations


Proceedings ArticleDOI
03 May 1982
TL;DR: Issues of recursive region segmentation are discussed in the context of PHOENIX, the newest version of the CMU region segmentation program, running on a VAX 11/780 under UNIX.
Abstract: Recursive segmentation of an image into regions using histograms is one of the most widely used techniques for image segmentation. At CMU, several versions of a region segmentation program have been developed based on this technique (Ohlander, Price, Shafer and Kanade). Based on these experiences, this paper discusses issues of recursive region segmentation in the context of PHOENIX, the newest version of region segmentation program, running on a VAX 11/780 under UNIX. The issues discussed in this paper include: Image features to be used in histogramming; comparison of the algorithm with other techniques; important improvements made in PHOENIX over its predecessor (Ohlander and Price); and some inherent problems in histogram-based segmentation together with suggestions for minimizing them. PHOENIX is being incorporated into the ARPA Image Understanding Testbed, under construction at SRI International.

43 citations
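The core recursive histogram-splitting loop can be sketched as follows. This is a deliberately simplified stand-in for the Ohlander/Price/PHOENIX programs, assuming a single 8-bit gray-level feature and a crude peak/valley rule; the real systems histogram several features per region and choose the best split among them.

```python
"""Simplified sketch of recursive, histogram-driven region segmentation.

One 8-bit gray-level feature, a smoothed histogram, and a split at the
deepest valley between the two largest peaks; the real Ohlander/Price and
PHOENIX programs histogram several features per region and pick the best
split among them.  min_peak and min_size are illustrative thresholds.
"""
import numpy as np
from scipy import ndimage

def find_valley(hist, min_peak=20):
    """Threshold at the deepest valley between the two largest peaks, or None."""
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    peaks = [i for i in range(1, 255)
             if smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]
             and smooth[i] >= min_peak]
    if len(peaks) < 2:
        return None                          # histogram looks unimodal: stop
    p1, p2 = sorted(sorted(peaks, key=lambda i: smooth[i])[-2:])
    return p1 + int(np.argmin(smooth[p1:p2 + 1]))

def segment(image, mask=None, min_size=100, label_start=1):
    """Recursively split regions; returns (label image, next unused label)."""
    if mask is None:
        mask = np.ones(image.shape, dtype=bool)
    labels = np.zeros(image.shape, dtype=int)
    hist, _ = np.histogram(image[mask], bins=256, range=(0, 256))
    thr = find_valley(hist)
    if thr is None or mask.sum() < min_size:
        labels[mask] = label_start           # region accepted as-is
        return labels, label_start + 1
    next_label = label_start
    for side in (mask & (image <= thr), mask & (image > thr)):
        comp, n = ndimage.label(side)        # split each side into connected parts
        for k in range(1, n + 1):
            sub, next_label = segment(image, comp == k, min_size, next_label)
            labels += sub
    return labels, next_label
```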


Journal ArticleDOI
TL;DR: Following a description of preprocessing techniques, the various features found in the vast accumulation of literature on handprint recognition are divided into two main categories: (1) global analysis, and (2) structural analysis.

42 citations


Journal ArticleDOI
TL;DR: A technique is described for locating desired structures utilizing user-specified information about properties of these structures and their relationships with other, more easily extracted objects; results of the processing of aerial pictures are presented.
Abstract: A technique for locating desired structures utilizing user-specified information about properties of these structures and their relationships with other, more easily extracted objects is described. An edge-based and region-based technique is used for scene segmentation. Experimental results of the processing of aerial pictures are presented.

Journal ArticleDOI
TL;DR: This work presents a new method of calculating the F–K basis functions for large dimensional imagery by using a small digital computer, when the intraclass variation can be approximated by correlation matrices of low rank.
Abstract: The Fukunaga–Koontz (F–K) transform is a linear transformation that performs image-feature extraction for a two-class image classification problem. It has the property that the most important basis functions for representing one class of image data (in a least-squares sense) are also the least important for representing a second image class. We present a new method of calculating the F–K basis functions for large dimensional imagery by using a small digital computer, when the intraclass variation can be approximated by correlation matrices of low rank. Having calculated the F–K basis functions, we use a coherent optical processor to obtain the coefficients of the F–K transform in parallel. Finally, these coefficients are detected electronically, and a classification is performed by the small digital computer.
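For readers who want the construction in concrete terms, a minimal numpy sketch of the two-class F–K transform follows. It assumes the class correlation matrices are estimated directly from sample vectors and does not reproduce the paper's low-rank computation or the coherent optical processor; names and the choice of k are illustrative.

```python
"""Minimal numpy sketch of the Fukunaga-Koontz (F-K) transform, two classes.

After whitening the sum of the class correlation matrices to the identity,
the whitened class matrices share eigenvectors, and an eigenvalue lam for
class 1 pairs with 1 - lam for class 2: the best basis functions for one
class are the worst for the other.
"""
import numpy as np

def fukunaga_koontz_basis(X1, X2):
    """X1, X2: (n_samples, n_dims) arrays of (zero-mean) image vectors."""
    R1 = X1.T @ X1 / len(X1)
    R2 = X2.T @ X2 / len(X2)
    # Whitening transform for R1 + R2.
    evals, evecs = np.linalg.eigh(R1 + R2)
    keep = evals > 1e-10                       # guard against rank deficiency
    W = evecs[:, keep] / np.sqrt(evals[keep])
    # Diagonalize the whitened class-1 matrix; its eigenvalues lie in [0, 1].
    lam, U = np.linalg.eigh(W.T @ R1 @ W)
    basis = W @ U                              # columns are the F-K basis functions
    return basis, lam                          # class-2 eigenvalues are 1 - lam

def fk_features(x, basis, lam, k=5):
    """Coefficients of x on the k basis functions most dominant for class 1."""
    order = np.argsort(lam)[::-1]              # eigenvalues near 1 favor class 1
    return (basis.T @ x)[order[:k]]
```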


Journal ArticleDOI
TL;DR: Application of this technique to the classification of wide bandwidth radar return signatures is presented; computer simulations proved successful and are also discussed.
Abstract: A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., $y(t) = \sum_{i=1}^{M} a_i x(t - \tau_i)$, where the coefficients $a_i$ and the delays $\tau_i$ are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters $a_i$ and $\tau_i$, $i = 1, \ldots, M$, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters $a_i$ and $\tau_i$, $i = 1, \ldots, M$, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.
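In the frequency domain the model above becomes Y(w)/X(w) = sum_i a_i exp(-j w tau_i), a sum of complex exponentials in w, which is the form Prony's method recovers. The sketch below applies a textbook (unadapted) Prony fit to uniformly spaced samples of Y(w)/X(w); the paper's adapted version, its choice of basis function, and its numerical safeguards are not reproduced, and M is assumed known.

```python
"""Sketch of delay/amplitude retrieval via Prony's method in the frequency
domain, for the model y(t) = sum_i a_i * x(t - tau_i).

H[k] are samples of Y(w)/X(w) at w = k*dw, which under the model equal
sum_i a_i * exp(-1j * tau_i * k * dw), a sum of M complex exponentials.
This is the classical, unadapted Prony fit, with M assumed known.
"""
import numpy as np

def prony_delays_amplitudes(H, M, dw):
    """Return (amplitudes a_i, delays tau_i) fitted to frequency samples H."""
    H = np.asarray(H, dtype=complex)
    N = len(H)
    # 1) Linear prediction: H[k] = -sum_m c[m] * H[k-m] for k = M..N-1.
    A = np.column_stack([H[M - m:N - m] for m in range(1, M + 1)])
    b = -H[M:]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    # 2) Roots of the prediction polynomial give z_i = exp(-1j * tau_i * dw).
    z = np.roots(np.concatenate(([1.0], c)))
    tau = np.mod(-np.angle(z), 2 * np.pi) / dw
    # 3) Amplitudes from a Vandermonde least-squares fit: H[k] = sum_i a_i * z_i**k.
    V = np.vander(z, N, increasing=True).T
    a, *_ = np.linalg.lstsq(V, H, rcond=None)
    return a, tau
```

A subset of the recovered (a_i, tau_i) pairs would then form the feature vector, as the abstract describes.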

Proceedings ArticleDOI
22 Nov 1982
TL;DR: This paper overviews the use of parallel processing techniques for various vision tasks using a parallel processing computer architecture known as PASM (partitionable SIMD/MIMD machine).
Abstract: It has been estimated that processor speeds on the order of 1 to 100 billion operations per second will be required to solve some of the current problems in computer vision. This paper overviews the use of parallel processing techniques for various vision tasks using a parallel processing computer architecture known as PASM (partitionable SIMD/MIMD machine). PASM is a large-scale multimicroprocessor system being designed for image processing and pattern recognition. It can be dynamically reconfigured to operate as one or more independent SIMD (single instruction stream-multiple data stream) and/or MIMD (multiple instruction stream-multiple data stream) machines. This paper begins with a discussion of the computational capabilities required for computer vision. It is then shown how parallel processing, and in particular PASM, can be used to meet these needs.

Journal ArticleDOI
TL;DR: A new algorithm has been applied to extract features from unaveraged (single) EEG records, which consist of single evoked responses elicited from human subjects who read textual material presented in the form of propositions.
Abstract: Our research goal is to develop a new methodology for studying brain function using single, unaveraged EEG records. This investigation has led to a new algorithm for feature extraction for the case of small design (learning) sets. The algorithm has been applied to extract features from unaveraged (single) EEG records, which consist of single evoked responses elicited from human subjects who read textual material presented in the form of propositions. The subjects were instructed to make a binary decision concerning each proposition. This gave two possible data classes. We selected features from the evoked event-related potentials (ERP's), and designed a classifier to assign the ERP's for each proposition to one of the two possible classes.

Journal ArticleDOI
TL;DR: A systematic feature extraction procedure is proposed, based on successive extractions of features, using the Gaussian minus-log-likelihood ratio as a basis for the extracted features.
Abstract: A systematic feature extraction procedure is proposed. It is based on successive extractions of features. At each stage a dimensionality reduction is made and a new feature is extracted. A specific example is given using the Gaussian minus-log-likelihood ratio as a basis for the extracted features. This form has the advantage that if both classes are Gaussianly distributed, only a single feature, the sufficient statistic, is extracted. If the classes are not Gaussianly distributed, additional features are extracted in an effort to improve the classification performance. Two examples are presented to demonstrate the performance of the procedure.
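A minimal sketch of the first extracted feature in the Gaussian case is given below: the minus-log-likelihood ratio, a quadratic function of the measurement vector that is by itself sufficient when both classes really are Gaussian. The successive dimensionality-reduction stages of the full procedure are not reproduced; the estimator choices are assumptions.

```python
"""Sketch of the Gaussian minus-log-likelihood-ratio feature.

Class means and covariances are estimated from training samples; the
extracted feature is h(x) = -log[p(x | class 1) / p(x | class 2)], a
quadratic form in x.  Estimator details are assumptions for illustration.
"""
import numpy as np

def gaussian_mllr_feature(X1, X2):
    """Fit class Gaussians to X1, X2 (n_samples x n_dims) and return h(x)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    S1_inv, S2_inv = np.linalg.inv(S1), np.linalg.inv(S2)
    log_det_term = 0.5 * (np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S2)[1])

    def h(x):
        d1, d2 = x - m1, x - m2
        return 0.5 * (d1 @ S1_inv @ d1 - d2 @ S2_inv @ d2) + log_det_term

    return h
```

If both classes are Gaussian, thresholding h(x) is already the Bayes rule; otherwise h(x) becomes the first of several extracted features, as the abstract describes.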

Journal ArticleDOI
TL;DR: Figures of merit are defined for possible linking of pairs of edge segments that continue one another or are anti-parallel to each other based on both the geometrical configuration of the segments and the gray levels associated with them.

Journal ArticleDOI
TL;DR: A simple and effective technique is proposed for eliminating spurious features due to strong reflections from illuminated surfaces, using an efficient border-following technique followed by linear scanning of the interiors.
Abstract: In the application of image processing techniques for feature selection in manufactured parts, the generation of spurious features due to strong reflections from illuminated surfaces (particularly metal parts) has presented problems in the proper analysis of features for quality control purposes. Such spurious features most frequently appear as bright regions on a dark background, and when image thresholding is performed for the purpose of feature detection, such regions appear as genuine features of the surface, causing ambiguities in the feature detection. This paper proposes a simple and effective technique for eliminating such features through an efficient border-following technique followed by linear scanning of the interiors. The effectiveness and the usefulness of the technique are demonstrated by considering a piston (in particular, the surface of the piston head) of the internal combustion engine of an automobile.
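The idea can be sketched compactly. The code below keeps the spirit of the method, thresholding the image, isolating bright blobs, and erasing their interiors, but substitutes connected-component labelling (scipy.ndimage) for the paper's explicit border following and linear interior scan; the threshold and area limit are illustrative assumptions.

```python
"""Sketch of spurious-highlight removal for images of manufactured parts.

Bright reflection artifacts appear as compact bright regions on a dark
background; they are found here by thresholding plus connected-component
labelling, and small bright components are erased.  Component labelling is
a stand-in for the paper's border following and linear interior scan; the
threshold and area limit are illustrative.
"""
import numpy as np
from scipy import ndimage

def remove_bright_spurious(image, bright_thresh=200, max_area=500):
    """Zero out bright connected regions no larger than max_area pixels."""
    bright = image > bright_thresh
    labels, n = ndimage.label(bright)
    cleaned = image.copy()
    for k in range(1, n + 1):
        region = labels == k
        if region.sum() <= max_area:         # small blob: treat as a reflection
            cleaned[region] = 0              # erase its interior
    return cleaned
```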

Journal ArticleDOI
TL;DR: In this paper, a pattern recognition technique is applied to recognize cutting states for difficult-to-cut materials, a first step toward realizing adaptive control of the cutting process.

Journal ArticleDOI
01 Jan 1982
TL;DR: The perturbation analyses done in this research verify the viability of using the parameters of a process model as a feature vector in a pattern recognition scheme.
Abstract: A method for the extraction of features for pattern recognition by system identification is presented. A test waveform is associated with a parameterized process model (PM) which is an inverse filter. The structure of the PM corresponds to the redundant information in a waveform, and the parameter values correspond to the discriminatory information. The PM used in this research is a linear predictive system whose parameters are the linear predictive coefficients (LPC's). This technique is applied to feature extraction of electrocardiograms (ECG's) for differential diagnosis. The LPC's are calculated for each ECG and used as a feature vector in a hypergeometric affine N-space spanned by the LPC's. The efficacy of this feature extraction technique is tested by three different perturbation methods, namely noise, matrix distortion, and a newly developed method called directed distortion. Both the Euclidean and Itakura distances between feature vectors in N-space are shown to increase with increasing perturbation of the template waveform. The monotonic behavior of a distance measure is a necessary attribute of a valid feature space. Thus the perturbation analyses done in this research verify the viability of using the parameters of a process model as a feature vector in a pattern recognition scheme.
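The two ingredients named above, LPC feature vectors and the Itakura distance, can be sketched briefly. The code below uses the standard autocorrelation method with Levinson-Durbin recursion and a common form of the Itakura distance; the ECG preprocessing, the choice of model order, and the exact distance convention of the paper are not reproduced.

```python
"""Sketch of LPC feature extraction and an Itakura distance between waveforms.

LPC's come from the autocorrelation method with Levinson-Durbin recursion;
the Itakura distance compares a test LPC vector against a template using the
template's autocorrelation matrix.  Order p and the distance convention are
assumptions for illustration.
"""
import numpy as np

def autocorr(x, p):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)])

def lpc(x, p):
    """Return a = [1, a1, ..., ap] from the Levinson-Durbin recursion."""
    r = autocorr(x, p)
    a = np.zeros(p + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, p + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]   # reflection update
        err *= 1.0 - k * k
    return a

def itakura_distance(a_test, x_template, p):
    """d = log[(a_test' R a_test) / (a_ref' R a_ref)], R from the template."""
    r = autocorr(x_template, p)
    R = np.array([[r[abs(i - j)] for j in range(p + 1)] for i in range(p + 1)])
    a_ref = lpc(x_template, p)
    return float(np.log((a_test @ R @ a_test) / (a_ref @ R @ a_ref)))
```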

Proceedings ArticleDOI
03 May 1982
TL;DR: An automatic speech recognition system is presented which starts from a demisyllable segmentation of the speech signal, based on a set of spectral and temporal acoustic features which are automatically extracted from LPC-spectra and assembled in one feature vector for each demisyLLable.
Abstract: An automatic speech recognition system is presented which starts from a demisyllable segmentation of the speech signal. Recognition of these segments is based on a set of spectral and temporal acoustic features which are automatically extracted from LPC-spectra and assembled in one feature vector for each demisyllable. The 24 components of this vector describe formants, formant loci, formant transitions, formant-like "links" for characterization of nasals, liquids or glides, the spectral distribution of fricative noise or bursts (turbulences), and duration of pauses. Preliminary recognition experiments were carried out with feature vectors extracted from a set of 360 German initial demisyllables which represent 45 consonant clusters combined with 8 vowels. When compared with template matching methods, the feature representations yield a drastic reduction in the number of components needed to represent each segment.

Journal ArticleDOI
TL;DR: A systematic feature extraction of a plane shape and a simple, essential description of that shape are given.

Journal ArticleDOI
TL;DR: A set of preprocessing algorithms is described, designed to register two images of TV-type video data in real time; these algorithms illustrate the importance of efficient global feature extraction hardware for image understanding applications.
Abstract: The application of a simulated binary array processor (BAP) to the rapid analysis of a sequence of images has been studied. Several algorithms have been developed which may be implemented on many existing parallel processing machines. The characteristic operations of a BAP are discussed and analyzed. A set of preprocessing algorithms is described which is designed to register two images of TV-type video data in real time. These algorithms illustrate the potential uses of a BAP, and their cost is analyzed in detail. The results of applying these algorithms to FLIR data and to noisy optical data are given. An analysis of these algorithms illustrates the importance of efficient global feature extraction hardware for image understanding applications.

Patent
14 Dec 1982
TL;DR: A character recognition apparatus is proposed with an original-feature extracting section that extracts, as an original feature, a feature deliberately neglected in the processes of pre-process conversion and recognition feature extraction; this original feature is used for final recognition of a set of characters, thus preventing erroneous character recognition.
Abstract: The invention provides a character recognition apparatus having an original feature extracting section for extracting as an original feature a feature deliberately neglected in the processes of pre-process conversion and recognition feature extraction. The original feature extracted by the original feature extracting section is used for final recognition of a set of characters, thus preventing erroneous character recognition.

Proceedings ArticleDOI
01 May 1982
TL;DR: A versatile human-machine interface designed to achieve an important balance between human factors and flexibility is described, which incorporates attributes of keypads, tablets and 'mice' and possesses freedom of keyshape, size and function.
Abstract: This paper describes a versatile human-machine interface designed to achieve an important balance between human factors and flexibility. It incorporates attributes of keypads, tablets and 'mice' and possesses freedom of keyshape, size and function. The system uses a TV camera focused on a 'keyboard' area to generate electrical signals in response to optical inputs. Additional optical and electronic feature extraction results in an input device capable of processing discrete and continuous input simultaneously.

Journal ArticleDOI
TL;DR: It is suggested that difference mapping may reflect a general synergistic mechanism relating topographic mapping and columnar architecture, which reduces the problem of feature extraction and segmentation for depth and color opponent channels to a single “textural” mechanism.
Abstract: Columnar architecture is a well established organizational principle for a variety of cortical systems. If two topographically mapped receptor systems, which receive slightly different "views" of the same physical stimulus, are interlaced as "columns", then the difference map of the afferent inputs is coded within a spatial frequency channel of the resultant map. The difference map of the left and right retinal views of a three-dimensional scene contains cues for the binocular disparity of the objects in the scene. Physical objects which are located at a common distance from the observer will be represented by areas of difference mapping which possess common cortical textural values. Thus, segmentation of the cortical representation of the visual scene by values of positional disparity may be accomplished by conventional monocular segmentation techniques, applied to the cortical representation. The difference map is carried by a spatial frequency modulation determined by the period of the columnar interlacing. Ocular dominance columns in human striate cortex suggest a spatial frequency carrier which is roughly equal to the inverse of Panum's area. Since the difference mapping is a global attribute of the cortical representation, and is not contingent on the existence of labeled single-cell feature extractors, the difference mapping algorithm represents a distinct alternative to conventional single-cell approaches to feature extraction. The difference mapping algorithm is briefly discussed in relation to other difference channels, such as color opponent segmentation and binocular orientation disparity. It is suggested that difference mapping may reflect a general synergistic mechanism relating topographic mapping and columnar architecture, which reduces the problem of feature extraction and segmentation for depth and color opponent channels to a single "textural" mechanism.

Journal ArticleDOI
TL;DR: This correspondence presents a procedure to recognize handprinted alphanumeric characters written on a graphic tablet; several statistical classifiers are evaluated, and a recursive learning procedure is introduced in the statistical classifier.
Abstract: This correspondence presents a procedure to recognize handprinted alphanumeric characters written on a graphic tablet. After preprocessing, the input character is segmented into a polygon using a simple segmentation procedure. A feature vector is formed by the parameters which describe the segments of the polygon. Classification is done in two steps, the first based on structural information extracted from the feature vector and the second based on a statistical decision rule using parameters of the segments. A recursive learning procedure is introduced in the statistical classifier. The evaluation includes the measurement of recognition rates using several statistical classifiers, a validity test of the hypothesis concerning the distribution of feature vectors, and the possibility of further simplification using principal axis analysis. Databases were created and used for the evaluation.
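The paper's "simple segmentation procedure" is not spelled out in the abstract; as a plausible stand-in, the sketch below approximates the pen trajectory by a polygon with the Ramer-Douglas-Peucker rule and forms a feature vector from each segment's length and direction. Both the segmentation rule and the feature choice are assumptions for illustration.

```python
"""Sketch of polygonal approximation of a tablet stroke and per-segment features.

Ramer-Douglas-Peucker is a plausible stand-in for the paper's 'simple
segmentation procedure'; the (length, direction) feature per segment is
likewise an assumption for illustration.  Points are (x, y) tuples.
"""
import math

def rdp(points, eps=2.0):
    """Reduce a polyline to vertices deviating from it by less than eps."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1) or 1e-9
    # Perpendicular distance of each interior point to the end-to-end chord.
    dists = [abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / norm
             for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        return rdp(points[:i + 1], eps)[:-1] + rdp(points[i:], eps)
    return [points[0], points[-1]]

def segment_features(points, eps=2.0):
    """(length, direction) of each segment of the polygonal approximation."""
    poly = rdp(points, eps)
    return [(math.hypot(x2 - x1, y2 - y1), math.atan2(y2 - y1, x2 - x1))
            for (x1, y1), (x2, y2) in zip(poly, poly[1:])]
```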

Journal ArticleDOI
TL;DR: Feature extraction is considered as a mean-square estimation of the Bayes risk vector; partitioning the distribution space into local subregions and performing a linear estimation in each subregion minimizes the mean-square error.
Abstract: Feature extraction is considered as a mean-square estimation of the Bayes risk vector. The problem is simplified by partitioning the distribution space into local subregions and performing a linear estimation in each subregion. A modified clustering algorithm is used to find the partitioning which minimizes the mean-square error.
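A minimal sketch of the construction: partition the measurement space with a clustering step (plain k-means here, standing in for the paper's modified clustering algorithm), then fit a separate linear least-squares estimator of the class-indicator (Bayes risk) vector within each cell. The number of cells and the fallback for empty cells are illustrative assumptions.

```python
"""Sketch of feature extraction as piecewise-linear estimation of the Bayes
risk (class-indicator) vector.

Plain k-means stands in for the paper's modified clustering algorithm; in
each cell a linear least-squares map from the measurements to the one-hot
class indicator gives a locally linear estimate of the risk vector, which
serves as the extracted feature vector.  k and the empty-cell fallback are
illustrative choices.
"""
import numpy as np
from scipy.cluster.vq import kmeans2

def fit_piecewise_linear_risk(X, y, n_classes, k=4):
    """X: (n, d) measurements, y: integer class labels in [0, n_classes)."""
    centroids, assign = kmeans2(X, k, minit="points")
    Y = np.eye(n_classes)[y]                      # one-hot class indicators
    Xa = np.hstack([X, np.ones((len(X), 1))])     # affine term
    weights = []
    for c in range(k):
        idx = assign == c
        if not np.any(idx):                       # empty cell: trivial estimator
            weights.append(np.zeros((X.shape[1] + 1, n_classes)))
            continue
        W, *_ = np.linalg.lstsq(Xa[idx], Y[idx], rcond=None)
        weights.append(W)
    return centroids, weights

def extract(x, centroids, weights):
    """Locally linear estimate of the risk vector for a single sample x."""
    c = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    return np.append(x, 1.0) @ weights[c]
```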

20 Mar 1982
TL;DR: A variety of features, including point-density data, texture, and edges, as well as existing cartographic knowledge, can be combined and organized through rules in order to more completely describe a point.
Abstract: An inadequate concept of how corresponding points relate to one another on dissimilar images has a greater effect than exposure geometry or data collection on registration problems in stereo photogrammetry. Conventional correlation, or one of its relatives, is the measure of similarity used in all automated stereo correlation systems. Correlation, a measure of the linear dependence between two sets of data, is an inadequate measure when there is less than, or more than, a moderate amount of image structure at and around points selected for image matching. The existence of structure should be recognized and utilized in an appropriate manner for image matching. Similarly, the absence of structure should be recognized, and the surrounding imagery should be used to complete matches where it is possible. The concurrent determination of what a pixel is, as well as where it is, can alleviate much of the registration problem. A variety of features including point-density data, texture, and edges, as well as existing cartographic knowledge, can be combined and organized through rules in order to more completely describe a point. The overall throughput of the compilation process will be improved in both time and accuracy if those functions which tend to support one another are concurrently, rather than sequentially, performed. If the compilation process takes place in image space, then the image matching operation, as well as the other feature extraction operations, can be ordered by the data processing manager to best suit the function of the process.
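For reference, the "conventional correlation" measure the report finds insufficient on its own is sketched below as normalized cross-correlation between a template patch and a search window; a weak or flat correlation surface is exactly the low-structure situation the report describes. The exhaustive search and patch handling are illustrative, not taken from any particular production system.

```python
"""Minimal sketch of conventional normalized cross-correlation matching.

Given a template patch from one image and a search window from the other,
the match is the offset maximizing the normalized correlation coefficient.
A weak peak or a flat score surface corresponds to the low-structure case
the report argues correlation alone cannot handle.
"""
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, window):
    """Exhaustive search over offsets of `template` inside `window`."""
    th, tw = template.shape
    scores = np.full((window.shape[0] - th + 1, window.shape[1] - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = ncc(template, window[i:i + th, j:j + tw])
    peak = np.unravel_index(np.argmax(scores), scores.shape)
    return peak, scores[peak]
```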