
Showing papers on "Feature extraction published in 1974"


Journal ArticleDOI
TL;DR: A new fast algorithm is proposed which allows for a variable number of segments in piecewise approximation as a way of feature extraction, data compaction, and noise filtering of boundaries of regions of pictures and waveforms.
Abstract: Piecewise approximation is described as a way of feature extraction, data compaction, and noise filtering of boundaries of regions of pictures and waveforms. A new fast algorithm is proposed which allows for a variable number of segments. After an arbitrary initial choice, segments are split or merged in order to drive the error norm under a prespecified bound. Results of computer experiments with cell outlines and electrocardiograms are reported.
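The split-and-merge procedure the abstract describes can be sketched as follows; the maximum-perpendicular-distance error norm, the midpoint split rule, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def seg_error(pts):
    """Maximum perpendicular distance from the points to the chord
    joining the segment's two endpoints (the error norm of this sketch)."""
    p0, p1 = pts[0], pts[-1]
    d = p1 - p0
    n = np.hypot(d[0], d[1])
    if n == 0:
        return float(np.linalg.norm(pts - p0, axis=1).max())
    v = pts - p0
    return float(np.abs(v[:, 0] * d[1] - v[:, 1] * d[0]).max() / n)

def split_and_merge(pts, bound, n_init=4):
    """Return breakpoint indices of a piecewise-linear fit whose per-segment
    error is at most `bound`, starting from an arbitrary uniform split."""
    brk = list(np.linspace(0, len(pts) - 1, n_init + 1, dtype=int))
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(brk) - 1:          # split pass: halve over-bound segments
            a, b = brk[i], brk[i + 1]
            if b - a > 1 and seg_error(pts[a:b + 1]) > bound:
                brk.insert(i + 1, (a + b) // 2)
                changed = True
            else:
                i += 1
        i = 0
        while i < len(brk) - 2:          # merge pass: join neighbours that still fit
            a, b = brk[i], brk[i + 2]
            if seg_error(pts[a:b + 1]) <= bound:
                del brk[i + 1]
                changed = True
            else:
                i += 1
    return brk
```

Note how the variable number of segments emerges: splitting drives every segment under the bound, and merging removes segments the bound does not require.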

589 citations


Journal ArticleDOI
01 Jan 1974
TL;DR: An approach to the problem of signature verification that treats the signature as a two-dimensional image and uses the Hadamard transform of that image as a means of data reduction and feature selection is described.
Abstract: This paper describes an approach to the problem of signature verification that treats the signature as a two-dimensional image and uses the Hadamard transform of that image as a means of data reduction and feature selection. This approach does not depend on the language or alphabet used and is general enough to have applications in such areas as cloud pattern surveys, aerial reconnaissance, and human-face recognition.
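A minimal sketch of Hadamard-transform data reduction, assuming a square power-of-two image and keeping a corner of coefficients in natural (Hadamard) rather than sequency order; the function names and the truncation rule are illustrative, not the paper's.

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (a power of two), Sylvester construction."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_features(img, k):
    """2-D Hadamard transform of an n x n image; keep a k x k corner of
    coefficients as a reduced feature vector."""
    n = img.shape[0]
    H = hadamard(n)
    coeffs = H @ img @ H.T / n        # separable 2-D transform; H is symmetric
    return coeffs[:k, :k].ravel()
```

Because the transform needs only additions and subtractions (entries are ±1), it was an attractive low-cost alternative to the Fourier transform for this kind of reduction.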

67 citations


Journal ArticleDOI
01 Mar 1974
TL;DR: The classification results presented in this paper show the feasibility of the proposed pictorial pattern recognition system in effectively screening out the abnormal pictures without human intervention.
Abstract: It is generally a problem to select the appropriate preprocessing and feature-extraction technique in most pictorial pattern recognition applications so that an accurate classification is possible. In this paper a class of pictures of medical importance, namely, chest X-ray pictures, is used to test the proposed preprocessing and feature-extraction technique. The technique presented in this paper is applied only to chest X-ray images; however, the same technique could also be applied to a fairly broad class of picture patterns with only some minor modifications. The proposed preprocessing technique, which utilizes the local and global information of the picture patterns, is to extract the lung boundary. The lung field is then enclosed by a polygon which is the piecewise linear approximation of the lung boundary. The set of texture features, which are the averages of some local property measures, is then extracted in this approximated lung area. The proposed technique has been tested on two sets of X-ray picture classes: one with abnormalities caused by a known disease and the other with abnormalities caused by some unknown effects in the lung region. The classification results presented in this paper show the feasibility of the proposed pictorial pattern recognition system in effectively screening out the abnormal pictures without human intervention.

53 citations


Journal ArticleDOI
01 Jul 1974
TL;DR: Extensive experimental results are given to show that classification of an unknown nonlinear system, with respect to basic structural properties, can be accomplished with a very high probability of correct classification.
Abstract: A fundamental problem in system modeling and theory is the characterization of the structure of an unknown nonlinear stochastic system when only input-output measurements are available. A method of classifying nonlinear stochastic systems, using pattern recognition and a pattern vector constructed from the input-output data, is proposed for ten stated classes of low-order nonlinear systems. The method is capable of extension to additional classes of nonlinear systems. Extensive experimental results are given to show that classification of an unknown nonlinear system, with respect to basic structural properties, can be accomplished with a very high probability of correct classification. Various applications of the classification procedure are given, particularly in the areas of systems modeling, self-organizing control systems, and learning control systems.

31 citations


Journal ArticleDOI
01 Jan 1974
TL;DR: A layer structured system suitable for pattern recognition which operates similarly to the afferent nervous system of vertebrates and corresponds approximately to the human capability for this task is described.
Abstract: This paper describes a layer structured system suitable for pattern recognition which operates similarly to the afferent nervous system of vertebrates. The ``system theory of homogeneous layers'' has been developed to describe signal transmission and signal processing between neuronal layers. Feature extraction in the sense of spatial filtering is performed by such a layered system with a few hierarchical stages. The last stage contains adaptive coupling which is adjusted by a learning process. The system has been simulated with a computer and parts of it with a coherent light arrangement. Its performance in recognizing handprinted characters (alphanumerics) is highly satisfactory and corresponds approximately to the human capability for this task.

16 citations


Journal ArticleDOI
TL;DR: A feature extraction method, inspired by principal component analysis, is applied to suitably transformed information in a reliability data bank; the failure patterns and time-observations are displayed simultaneously for maintenance control and design review.

16 citations


Patent
22 Jul 1974
TL;DR: In this paper, a feature extraction and selection technique for the recognition of characteristics identified with man-made objects within a scene of natural terrain, wherein the frequency of occurrence of the features is plotted in the form of three-dimensional histograms which describe the features of man-made objects, such as straight edges and regular geometric shapes.
Abstract: A feature extraction and selection technique for the recognition of characteristics identified with man-made objects within a scene of natural terrain, wherein the frequency of occurrence of the features is plotted in the form of three-dimensional histograms which describe the features of man-made objects, such as straight edges and regular geometric shapes. Employing conventional pattern recognition techniques, these features are used to classify the imagery as man-made or non-man-made.
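One way to read "three-dimensional histogram" is a 2-D grid of feature bins with frequency of occurrence as the third axis. A sketch under that assumption, binning edge orientation against edge magnitude (the choice of gradient features and the function name are hypothetical, not from the patent):

```python
import numpy as np

def edge_histogram(img, bins=16):
    """Frequency-of-occurrence counts over edge orientation and magnitude.
    The counts over these two axes form the 'three-dimensional histogram';
    long straight man-made edges pile up in a few orientation bins."""
    gy, gx = np.gradient(img.astype(float))   # simple finite-difference edges
    mag = np.hypot(gx, gy).ravel()
    ang = np.arctan2(gy, gx).ravel()
    hist, _, _ = np.histogram2d(ang, mag, bins=bins)
    return hist
```

The flattened histogram can then serve as the pattern vector for any conventional classifier, as the abstract suggests.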

13 citations


Journal ArticleDOI
01 Nov 1974
TL;DR: A novel technique utilizing a Gaussian point-to-line distance concept for calculation of "feature value" has been employed, and the recognition program extracts the twenty feature values and attempts to determine in which of the forty-nine character classes the unknown character belongs.
Abstract: Handprinted character recognition by computer is accomplished on forty-nine character classes with a high recognition rate (> 99.4 percent). The form of characters is constrained by requiring each character to be handprinted on a standard grid. The grid is composed of twenty line segments, each of which forms the basis for a feature, yielding twenty features to represent each character. A person printing these characters is not expected to remain precisely on the grid lines. The errors that do occur in following the grid lines are assumed to be normally distributed; therefore, each feature is based on a "longitudinal Gaussian-shaped surface." A page of constrained characters to be recognized is input to the computer using a television camera. Each character on the page is located, isolated from the other characters, and quantized into binary points. A novel technique utilizing a Gaussian point-to-line distance concept for calculation of "feature value" has been employed. The recognition program extracts the twenty feature values and attempts to determine in which of the forty-nine character classes the unknown character belongs. This decision is made based on these twenty features using a weighted minimum distance classifier. If only a marginal classification can be made, a second-level decision is used to increase the likelihood of correct classification. The second-level decision uses the most discriminating features of the two most likely character classes in order to increase the likelihood of correct classification. All character-dependent data are obtained through training techniques.
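The "longitudinal Gaussian-shaped surface" idea can be sketched as a Gaussian of the point-to-segment distance, averaged over a character's black points; the exact weighting, normalization, and names below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a, b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def feature_value(points, a, b, sigma=1.0):
    """Average Gaussian-weighted closeness of a character's black points to
    one grid segment: 1.0 when every point lies on the segment, near 0 when
    the stroke is far from it."""
    w = [np.exp(-point_segment_dist(p, a, b) ** 2 / (2.0 * sigma ** 2))
         for p in points]
    return float(np.mean(w))
```

Evaluating this against each of the twenty grid segments yields the twenty-element feature vector fed to the weighted minimum-distance classifier.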

8 citations


Journal ArticleDOI
TL;DR: Much emphasis ought to be placed on the a priori specific problem oriented knowledge, gained through experience, which the man brings to the machine and wishes to share with it in a versatile but structured way.
Abstract: An interactive computer environment is one which attempts to facilitate the interplay between man and machine in pursuit of a goal defined by man. Presumably, to be effective, this environment should allow the calculating speed, precision, and structured logical/iterative skill of the machine to serve the conceptual, intuitive, highly associative, and contextually sensitive attributes of human mental function in the solution of problems. Too often the match is obtuse, the goals vague, and the result frustration. Much emphasis ought to be placed on the a priori specific problem-oriented knowledge, gained through experience, which the man brings to the machine and wishes to share with it in a versatile but structured way.

8 citations



Journal ArticleDOI
TL;DR: Quantisation, feature extraction, recognition logic and context correction are discussed, and the basic OCR system operation is described.
Abstract: The author discusses systems and their performance. The basic OCR system operation is described. Scanning methods are discussed at length. Quantisation, feature extraction, recognition logic and context correction are discussed.

Proceedings ArticleDOI
01 Jan 1974
TL;DR: A program for automatically extracting lung and heart features in the digitized image of posteroanterior (PA) view chest radiographs is described; results obtained indicate the program can locate an accurate boundary in all cases except infants.
Abstract: This paper describes a program for automatically extracting lung and heart features in the digitized image of posteroanterior (PA) view chest radiographs. A graph-directed analysis is used to guide the search for objects from the largest to the smallest in the radiograph. Global information is used to guide the analysis of the program. Consequently, only the points in a small range are searched and tested against local criteria to detect boundary points. The entire lung boundary is broken into four segments: upper inside boundary, lower inside boundary, boundary along the diaphragm, and outside boundary. Slightly different global-local criteria for detecting the edge points along each segment have been developed and tested on 423 PA chest radiographs of patients of all ages. The results obtained indicate the program can locate an accurate boundary in all cases except infants. Twenty-seven measurements which describe the shape and size of the heart are extracted; these measurements are used for normal/abnormal classification via a modified maximum likelihood classification algorithm.

Journal ArticleDOI
01 Jan 1974
TL;DR: A suboptimum method of linear feature selection in a multiclass problem of classifying Japanese vowels, based on an upper bound on the probability of error, is presented.
Abstract: A suboptimum method of linear feature selection in a multiclass problem is presented. The set of features is selected in a sequential manner based on an upper bound on the probability of error. The proposed method is applied to a problem of classifying Japanese vowels. Computer simulation results are presented and discussed.
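The sequential (greedy) selection loop such a method relies on can be sketched generically; the `bound` callable stands in for the paper's upper bound on error probability, and the toy additive bound below is purely illustrative.

```python
def sequential_select(bound, n_feats, k):
    """Greedy sequential selection: at each step add the single feature whose
    inclusion gives the smallest value of bound(subset), an upper bound on
    the probability of error for that feature subset."""
    chosen, remaining = [], list(range(n_feats))
    for _ in range(k):
        best = min(remaining, key=lambda f: bound(chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

This is suboptimum in the same sense as the paper's method: each step is locally best, but the chosen subset need not minimize the bound over all size-k subsets.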


ReportDOI
30 Dec 1974
TL;DR: The present method extends the methods of feature extraction proposed by Fukunaga and Koontz and reduces to the orthogonal subspace method of Watanabe and Pakvasa.
Abstract: An approach to feature extraction based on functions of the class correlation matrices is described. If linear functions of the correlation matrices are chosen, the present method extends the methods of feature extraction proposed by Fukunaga and Koontz. If certain types of non-linear functions are employed, the method reduces to the orthogonal subspace method of Watanabe and Pakvasa. Optimization of selected features through selection of appropriate functions is discussed briefly. Preliminary results of classification of radar signatures using the feature extraction methods described here are presented.
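For context, the Fukunaga-Koontz construction this report extends can be sketched for two classes as follows; the estimation of correlation matrices from row-vector samples and the function name are illustrative choices.

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Two-class Fukunaga-Koontz transform from class correlation matrices.
    Whitens R1 + R2, then diagonalizes class 1's share: eigenvalues near 1
    mark directions that represent class 1 well, near 0 class 2."""
    R1 = X1.T @ X1 / len(X1)
    R2 = X2.T @ X2 / len(X2)
    vals, vecs = np.linalg.eigh(R1 + R2)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # whitening: W (R1+R2) W.T = I
    lam, U = np.linalg.eigh(W @ R1 @ W.T)
    return W.T @ U, lam                          # feature directions, weights
```

Because the whitened class matrices sum to the identity, they share eigenvectors and the eigenvalues pair off as lam and 1 - lam, which is what makes the same features discriminating for both classes.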

Journal ArticleDOI
TL;DR: A sequential feature extraction scheme is proposed for binary features, which is linear and near optimal, and performance bounds are developed for several design strategies.
Abstract: Numerous schemes are available for feature selection in a pattern recognition problem, but the feature extraction process is largely intuitive. A sequential feature extraction scheme is proposed for binary features. A decision function, which is linear and near optimal, is developed concurrently with each feature. Performance bounds are developed for several design strategies. Experimental results are given to illustrate the use of the scheme and the effectiveness of the performance bounds.


Proceedings ArticleDOI
01 Jan 1974
TL;DR: A real time, on-line EEG analysis strategy is described which incorporates feature extracting algorithms derived from models of human EEG interpretation which has been implemented on a dedicated minicomputer.
Abstract: The extremely complex nature of the electroencephalogram (EEG), and the subtle, nonquantified methods of pattern recognition used by human interpreters have made EEG analysis resistant to automation. Attempts at pattern recognition using multivariate classification procedures have not produced generalizable results due to the inadequate degree and quality of feature extraction prior to classification. A real time, on-line EEG analysis strategy is described which incorporates feature extracting algorithms derived from models of human EEG interpretation. A system based upon this strategy has been implemented on a dedicated minicomputer. It includes: 1) spectral analysis using the Fast Fourier Transform (FFT) to produce continuous estimates of power and coherence; 2) parallel time domain analysis to detect the occurrence of sharp transient events of possible clinical significance; 3) continuous isometric display of spectral and transient functions; 4) spectral and time domain algorithms for the rejection of noncortical and instrumental artifact; 5) heuristics to isolate patterns and events of potential clinical significance; 6) interactive alteration of analysis and display parameters to facilitate manipulation of data from various experimental paradigms; 7) on-line feedback to alter, when necessary, artifact rejection, transient detection and feature extraction decision thresholds.
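Item 1), FFT-based continuous power estimation, can be sketched as a segment-averaged periodogram reduced to clinical frequency bands; the Hann window, segment length, and band definitions are common illustrative choices, not the system's documented parameters.

```python
import numpy as np

def band_power(signal, fs, bands, seg_len=256):
    """Average windowed periodograms over non-overlapping segments (a simple
    Welch-style spectral estimate), then sum power in each named band."""
    n_seg = len(signal) // seg_len
    segs = signal[:n_seg * seg_len].reshape(n_seg, seg_len)
    win = np.hanning(seg_len)
    psd = np.mean(np.abs(np.fft.rfft(segs * win, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}
```

Run repeatedly over a sliding buffer, such band powers give the continuous spectral features that the time-domain transient detectors complement.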