
Showing papers on "Feature vector published in 1993"


Proceedings Article
Jane Bromley, Isabelle Guyon, Yann LeCun, E. Sackinger, Roopak Shah
29 Nov 1993
TL;DR: An algorithm for verification of signatures written on a pen-input tablet based on a novel, artificial neural network called a "Siamese" neural network, which consists of two identical sub-networks joined at their outputs.
Abstract: This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a "Siamese" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.
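A minimal sketch of the verification step described in this abstract, assuming a trained sub-network is available as a feature extractor; the Euclidean distance and the threshold value are illustrative choices, not the authors' settings:

```python
import numpy as np

def verify(signature, stored_feature, extract_features, threshold=1.0):
    """Accept the signature if its feature vector is close enough to the enrolled one.

    extract_features stands in for one trained Siamese sub-network (an assumption here);
    threshold is a placeholder value that would be chosen per application in practice.
    """
    candidate = extract_features(signature)                 # feature vector for this signature
    distance = np.linalg.norm(candidate - stored_feature)   # distance in feature space
    return bool(distance < threshold)                       # True = accepted, False = rejected as forgery
```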

2,980 citations


Journal ArticleDOI
TL;DR: In this article, a Siamese time delay neural network is used to measure the similarity between pairs of signatures, and the output of this half network is the feature vector for the input signature.
Abstract: This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory.

1,297 citations


Journal ArticleDOI
TL;DR: The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors.
Abstract: A novel approach to feature extraction for classification based directly on the decision boundaries is proposed. It is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.
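A rough sketch of the decision-boundary idea for a two-class Gaussian problem, under simplifying assumptions (boundary points located by bisection between opposite-class samples, informative directions taken as eigenvectors of the averaged boundary-normal outer products); this illustrates the concept rather than reproducing the paper's exact procedure:

```python
import numpy as np

def boundary_features(X0, X1, n_pairs=200, seed=0):
    """Estimate discriminantly informative directions from the decision boundary
    between two Gaussian classes (a simplified illustration of the idea above)."""
    rng = np.random.default_rng(seed)
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0inv, S1inv = np.linalg.inv(np.cov(X0.T)), np.linalg.inv(np.cov(X1.T))
    ld0, ld1 = np.linalg.slogdet(np.cov(X0.T))[1], np.linalg.slogdet(np.cov(X1.T))[1]

    def h(x):  # log-likelihood-ratio discriminant; h(x) = 0 is the decision boundary
        return ((x - m0) @ S0inv @ (x - m0) + ld0) - ((x - m1) @ S1inv @ (x - m1) + ld1)

    normals = []
    for _ in range(n_pairs):
        a, b = X0[rng.integers(len(X0))], X1[rng.integers(len(X1))]
        if h(a) * h(b) > 0:                # this pair does not straddle the boundary
            continue
        for _ in range(30):                # bisection onto the decision boundary
            mid = 0.5 * (a + b)
            a, b = (a, mid) if h(a) * h(mid) <= 0 else (mid, b)
        eps = 1e-4                         # numerical gradient of h = boundary normal
        g = np.array([(h(mid + eps * e) - h(mid - eps * e)) / (2 * eps)
                      for e in np.eye(len(mid))])
        normals.append(g / np.linalg.norm(g))

    M = np.mean([np.outer(n, n) for n in normals], axis=0)   # decision boundary feature matrix
    w, V = np.linalg.eigh(M)
    return V[:, ::-1], w[::-1]             # candidate feature directions, largest eigenvalue first
```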

401 citations


PatentDOI
TL;DR: The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data, takes advantage of temporal context, and estimates class probabilities conditioned on recent past history.

320 citations


Journal ArticleDOI
TL;DR: The authors use range profiles as the feature vectors for data representation, and they establish a decision rule based on the matching scores to identify aerospace objects, and the results demonstrated can be used for comparison with other identification methods.
Abstract: The authors use range profiles as the feature vectors for data representation, and they establish a decision rule based on the matching scores to identify aerospace objects. Reasons for choosing range profiles as the feature vectors are explained, and criteria for determining aspect increments for building the database are proposed. Typical experimental examples of the matching scores and recognition rates are provided and discussed. The results demonstrated can be used for comparison with other identification methods.
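A hedged sketch of matching-score identification against a database of stored range profiles indexed by aspect angle; the normalized-correlation score and the argmax decision rule are assumptions made for illustration:

```python
import numpy as np

def match_score(profile, template):
    """Normalized correlation between a measured range profile and a stored template."""
    p = profile - profile.mean()
    t = template - template.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

def identify(profile, database):
    """database: dict mapping object name -> list of stored profiles over aspect increments."""
    scores = {name: max(match_score(profile, tpl) for tpl in templates)
              for name, templates in database.items()}
    best = max(scores, key=scores.get)       # decision rule: highest matching score wins
    return best, scores
```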

274 citations


Patent
25 Aug 1993
TL;DR: In this article, a feature vector consisting of the highest order (most discriminatory) magnitude information from the power spectrum of the Fourier transform of the image is formed, and the output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate that a successful identification has been made.
Abstract: A method and apparatus under software control for pattern recognition utilizes a neural network implementation to recognize two dimensional input images which are sufficiently similar to a database of previously stored two dimensional images. Images are first image processed and subjected to a Fourier transform which yields a power spectrum. An in-class to out-of-class study is performed on a typical collection of images in order to determine the most discriminatory regions of the Fourier transform. A feature vector consisting of the highest order (most discriminatory) magnitude information from the power spectrum of the Fourier transform of the image is formed. Feature vectors are input to a neural network having preferably two hidden layers, input dimensionality of the number of elements in the feature vector and output dimensionality of the number of data elements stored in the database. Unique identifier numbers are preferably stored along with the feature vector. Application of a query feature vector to the neural network will result in an output vector. The output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate that a successful identification has been made. Where a successful identification has occurred, the unique identifier number may be displayed.
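A sketch of the feature-vector construction, assuming the "most discriminatory" power-spectrum bins are ranked by a between-class to within-class variance ratio (the patent itself only specifies an in-class to out-of-class study, so this ranking is a stand-in):

```python
import numpy as np

def power_spectrum(image):
    """Power spectrum of the 2-D Fourier transform of a (preprocessed) image."""
    return np.abs(np.fft.fft2(image)) ** 2

def select_bins(train_images, labels, n_bins=64):
    """Rank frequency bins by between-class vs. within-class variance of log power."""
    X = np.log1p(np.stack([power_spectrum(im).ravel() for im in train_images]))
    labels = np.asarray(labels)
    classes = np.unique(labels)
    means = np.stack([X[labels == c].mean(axis=0) for c in classes])
    between = means.var(axis=0)
    within = np.mean([X[labels == c].var(axis=0) for c in classes], axis=0) + 1e-12
    return np.argsort(between / within)[::-1][:n_bins]       # most discriminatory bins

def feature_vector(image, bins):
    """Feature vector: magnitude information from the selected power-spectrum bins."""
    return np.log1p(power_spectrum(image).ravel()[bins])
```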

269 citations


Proceedings Article
01 Jun 1993
TL;DR: This work applies Genetic Programming to the development of a processing tree for the classification of features extracted from images: measurements from a set of input nodes are weighted and combined through linear and nonlinear operations to form an output response.
Abstract: We apply Genetic Programming (GP) to the development of a processing tree for the classification of features extracted from images: measurements from a set of input nodes are weighted and combined through linear and nonlinear operations to form an output response. No constraints are placed upon size, shape, or order of processing within the network. This network is used to classify feature vectors extracted from IR imagery into target/nontarget categories using a database of 2000 training samples. Performance is tested against a separate database of 7000 samples. This represents a significant scaling up from the problems to which GP has been applied to date. Two experiments are performed: in the first set, we input classical "statistical" image features and minimize misclassification of target and non-target samples. In the second set of experiments, GP is allowed to form its own feature set from primitive intensity measurements. For purposes of comparison, the same training and test sets are used to train two other adaptive classifier systems, the binary tree classifier and the backpropagation neural network. The GP network achieves higher performance with reduced computational requirements. The contributions of GP "schemata," or subtrees, to the performance of generated trees are examined.

263 citations


Journal ArticleDOI
TL;DR: The nontraditional approach to the problem of estimating the parameters of a stochastic linear system is presented and it is shown how the evolution of the dynamics as a function of the segment length can be modeled using alternative assumptions.
Abstract: A nontraditional approach to the problem of estimating the parameters of a stochastic linear system is presented. The method is based on the expectation-maximization algorithm and can be considered as the continuous analog of the Baum-Welch estimation algorithm for hidden Markov models. The algorithm is used for training the parameters of a dynamical system model that is proposed for better representing the spectral dynamics of speech for recognition. It is assumed that the observed feature vectors of a phone segment are the output of a stochastic linear dynamical system, and it is shown how the evolution of the dynamics as a function of the segment length can be modeled using alternative assumptions. A phoneme classification task using the TIMIT database demonstrates that the approach is the first effective use of an explicit model for statistical dependence between frames of speech.

238 citations


Proceedings ArticleDOI
E. Bocchieri
27 Apr 1993
TL;DR: The author presents an efficient method for the computation of the likelihoods defined by weighted sums (mixtures) of Gaussians, which uses vector quantization of the input feature vector to identify a subset of Gaussian neighbors.
Abstract: In speech recognition systems based on continuous observation density hidden Markov models, the computation of the state likelihoods is an intensive task. The author presents an efficient method for the computation of the likelihoods defined by weighted sums (mixtures) of Gaussians. This method uses vector quantization of the input feature vector to identify a subset of Gaussian neighbors. It is shown that, under certain conditions, instead of computing the likelihoods of all the Gaussians, one needs to compute the likelihoods of only the Gaussian neighbors. Significant (up to a factor of nine) likelihood computation reductions have been obtained on various databases, with only a small loss of recognition accuracy.
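A minimal sketch of the Gaussian-shortlist idea: vector-quantize the input frame and evaluate only the Gaussians associated with that codeword. Diagonal covariances and the way neighbor lists are built here are illustrative assumptions:

```python
import numpy as np

def build_neighbor_lists(codebook, means, n_neighbors=8):
    """For each VQ codeword, keep the indices of the Gaussians whose means lie nearest."""
    return [np.argsort(np.linalg.norm(means - c, axis=1))[:n_neighbors] for c in codebook]

def mixture_likelihood(x, weights, means, variances, codebook, neighbor_lists):
    """Likelihood of one feature vector under a diagonal-covariance Gaussian mixture,
    summing only over the Gaussian neighbors of the frame's VQ codeword."""
    code = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))   # quantize the input frame
    total = 0.0
    for g in neighbor_lists[code]:                                # neighbor Gaussians only
        diff = x - means[g]
        log_pdf = -0.5 * (np.sum(np.log(2 * np.pi * variances[g]))
                          + np.sum(diff * diff / variances[g]))
        total += weights[g] * np.exp(log_pdf)
    return total
```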

185 citations


Journal ArticleDOI
TL;DR: An important conclusion about the present method is that the Foley-Sammon optimal set of discriminant vectors is a special case of the set of optimal discriminant projection vectors.

183 citations


PatentDOI
Masafumi Nishimura, Masaaki Okochi
TL;DR: Fenonic hidden Markov models for speech transformation candidates are combined with N-gram probabilities (where N is an integer greater than or equal to 2) to produce models of words.
Abstract: Analysis of a word input from a speech input device 1 for its features is made by a feature extractor 4 to obtain a feature vector sequence corresponding to said word, or to obtain a label sequence by applying a further transformation in a labeler 8. Fenonic hidden Markov models for speech transformation candidates are combined with N-gram probabilities (where N is an integer greater than or equal to 2) to produce models of words. The recognizer determines the probability that the speech model composed for each candidate word would output the label sequence or feature vector sequence input as speech, and outputs the candidate word corresponding to the speech model having the highest probability to a display 19.

Journal ArticleDOI
TL;DR: Invariant parameters derived from the bispectrum are used to classify one-dimensional shapes; the technique is fast, suited to parallel implementation, and has high immunity to additive Gaussian noise.
Abstract: A new approach to pattern recognition using invariant parameters based on higher order spectra is presented. In particular, invariant parameters derived from the bispectrum are used to classify one-dimensional shapes. The bispectrum, which is translation invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale and amplification invariant, as well. A minimal set of these invariants is selected as the feature vector for pattern classification, and a minimum distance classifier using a statistical distance measure is used to classify test patterns. The classification technique is shown to distinguish two similar, but different bolts given their one-dimensional profiles. Pattern recognition using higher order spectral invariants is fast, suited for parallel implementation, and has high immunity to additive Gaussian noise. Simulation results show very high classification accuracy, even for low signal-to-noise ratios.
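A sketch of the integrated-bispectrum invariants for a one-dimensional signal: along each radial line f2 = a*f1 through the origin of bifrequency space, sum B(f1, a*f1) = X(f1) X(a*f1) X*((1+a)f1) and keep the phase of the sum. The discretization and the set of slopes are illustrative choices:

```python
import numpy as np

def bispectrum_phase_features(signal, slopes=(0.25, 0.5, 0.75, 1.0)):
    """Phase of the bispectrum integrated along radial lines f2 = a * f1 (discrete sketch)."""
    X = np.fft.fft(signal)
    n = len(X)
    feats = []
    for a in slopes:
        acc = 0.0 + 0.0j
        for k in range(1, n // 2):
            j = int(round(a * k))
            if j < 1 or k + j >= n // 2:              # stay inside the principal region
                continue
            acc += X[k] * X[j] * np.conj(X[k + j])    # bispectrum sample B(k, j)
        feats.append(np.angle(acc))                   # phase: translation/scale/amplification invariant
    return np.array(feats)
```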

Journal ArticleDOI
TL;DR: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age using a video camera and commercial frame grabber on a PC-based computer system.
Abstract: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age. The process involved the digitization of 69 mammographic images using a video camera and a commercial frame grabber on a PC-based computer system. An interactive segmentation procedure identified the tumor boundary using a thresholding technique which successfully segmented 57% of the lesions. Several features were chosen based on the gross and fine shape describing properties of the tumor boundaries as seen on the radiographs. Patient age was included as a significant feature in determining whether the tumor was a cyst, fibroadenoma, or cancer and was the only patient history information available for this study. The concept of a radial length measure provided a basis from which 6 of the 7 shape describing features were chosen, the seventh being tumor circularity. The feature selection process was accomplished using linear discriminant analysis and a Euclidean distance metric determined group membership. The effectiveness of the classification scheme was tested using both the apparent and the leaving-one-out test methods. The best results using the apparent test method resulted in correctly classifying 82% of the tumors segmented using the entire feature space and the highest classification rate using the leaving-one-out test method was 69% using a subset of the feature space. The results using only the shape descriptors, and excluding patient age resulted in correctly classifying 72% using the entire feature space (except age), and 51% using a subset of the feature space.
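A sketch of radial-length shape features for a closed tumor boundary given as (x, y) points; the abstract does not list the six radial-length features actually used, so the statistics below are illustrative stand-ins, with circularity included as the seventh feature:

```python
import numpy as np

def radial_length_features(boundary_xy):
    """Shape features from the radial lengths of a closed boundary (illustrative set)."""
    b = np.asarray(boundary_xy, dtype=float)
    r = np.linalg.norm(b - b.mean(axis=0), axis=1)     # radial length to each boundary point
    r = r / r.max()                                    # normalize for scale invariance
    nxt = np.roll(b, -1, axis=0)
    area = 0.5 * abs(np.sum(b[:, 0] * nxt[:, 1] - nxt[:, 0] * b[:, 1]))   # shoelace formula
    perimeter = np.sum(np.linalg.norm(nxt - b, axis=1))
    circularity = perimeter ** 2 / (4 * np.pi * area)  # 1.0 for a perfect circle
    zero_cross = np.sum(np.diff(np.sign(r - r.mean())) != 0)   # crossings of the mean radius
    return np.array([r.mean(), r.std(),
                     np.abs(np.diff(r)).mean(),        # boundary roughness
                     float(zero_cross),
                     r.max() - r.min(),
                     circularity])
```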

Journal ArticleDOI
TL;DR: It is shown how such spatio-chromatic features can be extracted using multi-scaled filtering and correlation methods which capture the variations of colour over space in ways which encode important image features not extracted by techniques which separate colour, texture and shape into separate channels.

Journal ArticleDOI
TL;DR: It is shown that this method has better performance in terms of minimizing the number of classification errors than the squared error minimization method used in backpropagation.
Abstract: A pattern classification method called neural tree networks (NTNs) is presented. The NTN consists of neural networks connected in a tree architecture. The neural networks are used to recursively partition the feature space into subregions. Each terminal subregion is assigned a class label which depends on the training data routed to it by the neural networks. The NTN is grown by a learning algorithm, as opposed to multilayer perceptrons (MLPs), where the architecture must be specified before learning can begin. A heuristic learning algorithm based on minimizing the L1 norm of the error is used to grow the NTN. It is shown that this method has better performance in terms of minimizing the number of classification errors than the squared error minimization method used in backpropagation. An optimal pruning algorithm is given to enhance the generalization of the NTN. Simulation results are presented on Boolean function learning tasks and a speaker independent vowel recognition task. The NTN compares favorably to both neural networks and decision trees.

Dissertation
01 Jan 1993
TL;DR: In this dissertation, the author describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment, including the use of desk-top microphones and different training and testing conditions.
Abstract: This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of environmental variability are introduced by the use of desk-top microphones and different training and testing conditions: additive noise and spectral tilt introduced by linear filtering. An important attribute of the novel compensation algorithms described in this thesis is that they provide joint rather than independent compensation for these two types of degradation. Acoustical compensation is applied in our algorithms as an additive correction in the cepstral domain. This allows a higher degree of integration within SPHINX, the Carnegie Mellon speech recognition system, which uses the cepstrum as its feature vector. Therefore, these algorithms can be implemented very efficiently. Processing in many of these algorithms is based on instantaneous signal-to-noise ratio (SNR), as the appropriate compensation represents a form of noise suppression at low SNRs and spectral equalization at high SNRs. The compensation vectors for additive noise and spectral transformations are estimated by minimizing the differences between speech feature vectors obtained from a “standard” training corpus of speech and feature vectors that represent the current acoustical environment. In our work this is accomplished by minimizing the distortion of vector-quantized cepstra that are produced by the feature extraction module in SPHINX. In this dissertation we describe several algorithms, including SNR-Dependent Cepstral Normalization (SDCN) and Codeword-Dependent Cepstral Normalization (CDCN). With CDCN, the accuracy of SPHINX when trained on speech recorded with a close-talking microphone and tested on speech recorded with a desk-top microphone is essentially the same as that obtained when the system is trained and tested on speech from the desk-top microphone. An algorithm for frequency normalization has also been proposed, in which the parameter of the bilinear transformation that is used by the signal-processing stage to produce frequency warping is adjusted for each new speaker and acoustical environment. The optimum value of this parameter is again chosen to minimize the vector-quantization distortion between the standard environment and the current one. In preliminary studies, use of this frequency normalization produced a moderate additional decrease in the observed error rate.
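A minimal sketch of an SNR-dependent additive cepstral correction in the spirit of SDCN, assuming a correction table has already been estimated (for example, as the average difference between "standard" and current-environment cepstra at each SNR level); the binning and table layout are assumptions:

```python
import numpy as np

def compensate(cepstra, frame_snr_db, correction_table, snr_edges):
    """Additive cepstral compensation selected by instantaneous SNR (SDCN-style sketch).

    cepstra:          (T, D) cepstral frames from the current environment
    frame_snr_db:     (T,)   instantaneous SNR estimate per frame
    correction_table: (n_bins, D) correction vectors, assumed pre-estimated
    snr_edges:        (n_bins - 1,) SNR bin edges in dB
    """
    bins = np.digitize(frame_snr_db, snr_edges)   # SNR bin index for each frame
    return cepstra + correction_table[bins]       # additive correction in the cepstral domain
```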

Journal ArticleDOI
TL;DR: Simulation results are presented, showing that initializing back propagation networks with prototypes generally results in drastic reductions in training time, improved robustness against local minima, and better generalization.

Patent
15 Oct 1993
TL;DR: In this paper, a Cartesian coordinate system is defined in the feature space so that one axis of the system passes through the two centroids, and a two dimensional feature space histogram of the two images is produced and a separate centroid is located in the histogram for each one of a pair of tissue types.
Abstract: A dual echo magnetic resonance imaging system produces two registered images of a patient in which the images have different contrast relationships between different tissue types. A two dimensional feature space histogram of the two images is produced and a separate centroid is located in the feature space histogram for each one of a pair of tissue types. A Cartesian coordinate system is defined in the feature space so that one axis of the system passes through the two centroids. Vector decomposition is employed to project each image element data point in the feature space onto a point on the one axis. The fractional quantity of each tissue type present in the image element is determined based upon the Euclidean distances from that axis point to the respective centroids. The fractional quantity is calculated for each element in the original images to form a pair of tissue images. The elements of a tissue image are processed to measure the amount of that tissue type in the imaged portion of the patient.
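A sketch of the centroid-axis projection step: each pixel's two-dimensional feature point is projected onto the line through the two tissue centroids, and its position along that line gives the tissue fractions. Clipping to [0, 1] for points projecting beyond either centroid is an assumption:

```python
import numpy as np

def tissue_fractions(points, centroid_a, centroid_b):
    """Fractions of tissue A and tissue B for 2-D feature-space points (sketch)."""
    axis = centroid_b - centroid_a                             # axis through both centroids
    t = ((points - centroid_a) @ axis) / float(axis @ axis)    # 0 at centroid A, 1 at centroid B
    t = np.clip(t, 0.0, 1.0)                                   # assumption: clamp points beyond a centroid
    return np.stack([1.0 - t, t], axis=-1)                     # [fraction of A, fraction of B] per point

# Example: a point halfway between the centroids is half of each tissue type.
# tissue_fractions(np.array([[0.5, 0.5]]), np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```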

Journal ArticleDOI
TL;DR: This research suggests the application of monotonicity constraints to the back propagation learning algorithm to improve neural network performance and efficiency in classification applications where the feature vector is related monotonically to the pattern vector.
Abstract: Neural network techniques are widely used in solving pattern recognition or classification problems. However, when statistical data are used in supervised training of a neural network employing the back-propagation least mean square algorithm, the behavior of the classification boundary during training is often unpredictable. This research suggests the application of monotonicity constraints to the back propagation learning algorithm. When the training sample set is preprocessed by a linear classification function, neural network performance and efficiency can be improved in classification applications where the feature vector is related monotonically to the pattern vector. Since most classification problems in business possess monotonic properties, this technique is useful in those problems where any assumptions about the properties of the data are inappropriate.

Proceedings ArticleDOI
20 Oct 1993
TL;DR: An adaptation of hidden Markov models (HMM) to automatic recognition of unrestricted handwritten words and many interesting details of a 50,000 vocabulary recognition system for US city names are described.
Abstract: The paper describes an adaptation of hidden Markov models (HMM) to automatic recognition of unrestricted handwritten words. Many interesting details of a 50,000 vocabulary recognition system for US city names are described. This system includes feature extraction, classification, estimation of model parameters, and word recognition. The feature extraction module transforms a binary image to a sequence of feature vectors. The classification module consists of a transformation based on linear discriminant analysis and Gaussian soft-decision vector quantizers which transform feature vectors into sets of symbols and associated likelihoods. Symbols and likelihoods form the input to both HMM training and recognition. HMM training performed in several successive steps requires only a small amount of gestalt labeled data on the level of characters for initialization. HMM recognition based on the Viterbi algorithm runs on subsets of the whole vocabulary.

Journal ArticleDOI
TL;DR: A novel approach is proposed, based on construction of a Voronoi diagram over the set of points representing patterns in feature space, which finds added usefulness in deriving alternate neural network structures for realizing the desired pattern classification.
Abstract: A novel approach is proposed which determines the number of layers, the number of neurons in each layer, and their connection weights for a particular implementation of a neural network, with the multilayer feedforward topology, designed to classify patterns in the multidimensional feature space. The approach is based on construction of a Voronoi diagram over the set of points representing patterns in feature space, and this construction also proves useful in deriving alternate neural network structures for realizing the desired pattern classification.

Journal ArticleDOI
TL;DR: A heuristic solution is proposed to lessen the inherent ambiguity of the principal axes method for object orientation; excellent classification success rates are achieved using only a small number of normalized moments as elements of the feature vector.

Journal ArticleDOI
TL;DR: This paper introduces a new extension of the finite-dimensional spline-based approach for incorporating edge information, and derives explicit formulas for these edge warps, evaluates the quadratic form expressing bending energies of their formal combinations, and shows the resulting spectrum of edge features in typical scenes.
Abstract: In many current medical applications of image analysis, objects are detected and delimited by boundary curves or surfaces. Yet the most effective multivariate statistics available pertain to labeled points (landmarks) only. In the finite-dimensional feature space that landmarks support, each case of a data set is equivalent to a deformation map deriving it from the average form. This paper introduces a new extension of the finite-dimensional spline-based approach for incorporating edge information. In this implementation edgels are restricted to landmark loci: they are interpreted as pairs of landmarks at infinitesimal separation in a specific direction. The effect of changing edge direction is a singular perturbation of the thin-plate spline for the landmarks alone. An appropriate normalization yields a basis for image deformations corresponding to changes of edge direction without landmark movement; this basis complements the basis of landmark deformations ignoring edge information. We derive explicit formulas for these edge warps, evaluate the quadratic form expressing bending energies of their formal combinations, and show the resulting spectrum of edge features in typical scenes. These expressions will aid all investigations into medical images that entail comparisons of anatomical scene analyses to a normative or typical form.

Journal ArticleDOI
TL;DR: A scale-space clustering algorithm, more powerful than existing ones, may be useful for remote sensing for land use because it can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes.
Abstract: The authors applied a scale-space clustering algorithm to the classification of a multispectral and polarimetric SAR image of an agricultural site. After the initial polarimetric and radiometric calibration and noise cancellation, a 12-dimensional feature vector for each pixel was extracted from the scattering matrix. The clustering algorithm partitioned a set of unlabeled feature vectors from 13 selected sites, each site corresponding to a distinct crop, into 13 clusters without any supervision. The cluster parameters were then used to classify the whole image. The classification map is much less noisy and more accurate than those obtained by hierarchical rules. Starting with every point as a cluster, the algorithm works by melting the system to produce a tree of clusters in the scale space. It can cluster data in any multidimensional space and is insensitive to variability in cluster densities, sizes and ellipsoidal shapes. This algorithm, more powerful than existing ones, may be useful for remote sensing for land use.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: The results of this study indicate that well-isolated familiar sounds can be recognized with high accuracy by applying standard statistical classification procedures to feature vectors derived from two-dimensional cepstral coefficients.
Abstract: The author describes a preliminary study designed to answer the question, 'How well can familiar environmental sounds be identified?' By familiar is meant sounds on which the recognition system has been previously trained. Environmental sounds are sounds generated by acoustic sources common in domestic, business, and out-of-doors environments. The results of this study indicate that well-isolated familiar sounds can be recognized with high accuracy by applying standard statistical classification procedures to feature vectors derived from two-dimensional cepstral coefficients.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A hidden-Markov-model (HMM)-based system for font-independent spotting of user-specified keywords in a scanned image is described, and applications of word-image spotting include information filtering in images from facsimile and copy machines, and information retrieval from text image databases.
Abstract: A hidden-Markov-model (HMM)-based system for font-independent spotting of user-specified keywords in a scanned image is described. Word bounding boxes of potential keywords are extracted from the image using a morphology-based preprocessor. Feature vectors based on the external shape and internal structure of the word are computed over vertical columns of pixels in a word bounding box. For each user-specified keyword, an HMM is created by concatenating appropriate context-dependent character HMMs. Nonkeywords are modeled using an HMM based on context-dependent subcharacter models. Keyword spotting is performed using a Viterbi search through the HMM network created by connecting the keyword and nonkeyword HMMs in parallel. Applications of word-image spotting include information filtering in images from facsimile and copy machines, and information retrieval from text image databases.

Journal ArticleDOI
TL;DR: A generalization of anisotropic diffusion is suggested as a mechanism for making reliable and precise geometric measurements in the presence of blurring and noise and a distance measure is presented that results in a process for reliable and accurate detection of "creases" and "corners".
Abstract: Much of image processing and artificial vision has focused on the detection of edges—particularly, edges that are measured by the gradient magnitude. Higher order geometry can provide a richer variety of information about objects within images and can also yield useful measurements which are invariant to certain kinds of intensity transformations. However, analyzing higher order geometry can be difficult because of the sensitivity of higher order filters to noise. Low pass filters can alleviate the effects of high frequency noise but tend to distort the geometry in ways that make the resulting measurements less useful. This paper suggests a generalization of anisotropic diffusion as a mechanism for making reliable and precise geometric measurements in the presence of blurring and noise. This mechanism is a generalized form of edge-affected diffusion that applies to multi-valued functions. We pursue the interpretation of multi-valued descriptors as positions in a feature space and describe how this premise yields a natural form for a set of coupled anisotropic diffusion equations that depend on one's choice of distance in the resulting feature space. The appropriate choice of distance allows one to measure areas of the image where the feature positions are changing rapidly and vary the conductance in the diffusion equation accordingly. These features can be the outputs of some multi-valued imaging device, or measurements made (via filters) on a single-valued image. Feature spaces that consist of measurements made on single-valued images can reflect geometric properties of the local intensity surface. The anisotropic diffusion can be used to segment images into patches that share local geometric properties so that the boundaries of these patches are geometrically and visually interesting. The appropriate choice of distance in such feature spaces can yield meaningful geometric information. One such geometric feature space consists of the first-order derivatives. This paper presents a distance measure in this space that results in a process for reliable and accurate detection of "creases" and "corners." These ideas can be generalized to other features, including higher order derivatives. The appropriate choice of distance in such feature spaces could yield meaningful higher order geometric information.
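A compact sketch of coupled anisotropic diffusion on a multi-valued image, where a single conductance per pixel is computed from a distance in feature space and drives the update of every channel; the Euclidean feature-space distance, the exponential conductance g(d) = exp(-(d/k)^2), and the periodic borders are illustrative simplifications:

```python
import numpy as np

def diffuse(F, n_iter=20, k=0.1, dt=0.2):
    """Coupled anisotropic diffusion of a multi-valued image F of shape (H, W, C)."""
    F = F.astype(float).copy()

    def g(d):  # conductance from feature-space distance (one value per pixel)
        dist2 = np.sum(d * d, axis=-1, keepdims=True)
        return np.exp(-dist2 / (k * k))

    for _ in range(n_iter):
        # feature-vector differences to the four neighbors (periodic borders, for brevity)
        dn = np.roll(F, -1, axis=0) - F
        ds = np.roll(F, 1, axis=0) - F
        de = np.roll(F, -1, axis=1) - F
        dw = np.roll(F, 1, axis=1) - F
        # the same conductance weights every channel, coupling the equations
        F = F + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return F
```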

Patent
29 Nov 1993
TL;DR: In this article, an electric neural network including a node having multipliers respectively receiving signals representing feature vector elements and signals representing weight vector elements to produce product signals, a summer to add the product signals with a bias signal and output a sum signal to a hard limit, the hard limit for outputting a preliminary output signal of polarity.
Abstract: An apparatus and methods characterized by an electric neural network including a node having multipliers respectively receiving signals representing feature vector elements and signals representing weight vector elements to produce product signals, a summer to add the product signals with a bias signal and output a sum signal to a hard limiter, the hard limiter for outputting a preliminary output signal of polarity. In response to the output signal of polarity, one of at least two logic branches is enabled. In response to such enabling, weight elements are assigned to a next weight vector to be used in subsequent processing by the one of the at least two logic branches until a label is to be produced.
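A sketch of the node's forward step as described in the abstract: elementwise products of the feature and weight vectors are summed with a bias, hard-limited to a polarity, and the polarity selects one of two logic branches. The branch contents and calling convention are placeholders; the patent's weight-assignment scheme is not reproduced here:

```python
import numpy as np

def node_forward(feature_vec, weight_vec, bias):
    """Multipliers, summer, and hard limiter: returns the polarity (+1 or -1)."""
    s = float(np.dot(feature_vec, weight_vec)) + bias   # products of elements, summed with a bias
    return 1 if s >= 0 else -1                          # hard-limited preliminary output

def process(feature_vec, weight_vec, bias, branch_pos, branch_neg):
    """Enable one of two logic branches according to the polarity (branches are placeholders)."""
    polarity = node_forward(feature_vec, weight_vec, bias)
    branch = branch_pos if polarity > 0 else branch_neg
    return branch(feature_vec)      # the enabled branch may assign its own next weight vector
```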

Journal ArticleDOI
TL;DR: The problem of volume averaging in quantitating CSF, gray-matter, and white-matter fractions in the brain is solved using a three-compartment model and a simple graphical analysis of a multispectral MR feature space, and the application of this technique to patients with neurological disorders is anticipated.
Abstract: The problem of volume averaging in quantitating CSF, gray-matter, and white-matter fractions in the brain is solved using a three-compartment model and a simple graphical analysis of a multispectral MR feature space. Compartmentalization is achieved without the ambiguities of thresholding techniques or the need to assume that the underlying pixel probability distributions have a particular form. A 2D feature space is formed by double SE (proton density- and T2-weighted) MR data with image nonuniformity removed by a novel technique in which the brain itself serves as a uniformity reference. Compartments other than the basic three were rejected by the tailoring of limits in feature space. Phantom scans substantiate this approach, and the importance of the careful selection and standardization of pure tissue reference signals is demonstrated. Compartmental profiles from standardized subvolumes of three normal brains, based on a 3D (Talairach) coordinate system, demonstrate slice-by-slice detail; longitudinal studies confirm reproducibility. Compartmentalization may be described graphically and algebraically, complementing data displays in feature space and images of compartmentalized brain scans. These studies anticipate the application of our compartmentalization technique to patients with neurological disorders.

Journal ArticleDOI
TL;DR: The authors explore alternatives that reduce the number of network weights while maintaining geometric invariant properties for recognizing patterns in real-time processing applications by examining the properties of various feature spaces for higher-order neural networks (HONNs).
Abstract: The authors explore alternatives that reduce the number of network weights while maintaining geometric invariant properties for recognizing patterns in real-time processing applications. This study is limited to translation and rotation invariance. The primary interest is in examining the properties of various feature spaces for higher-order neural networks (HONNs), in correlated and uncorrelated noise, such as the effect of various types of input features, feature size and number of feature pixels, and effect of scene size. The robustness of HONN training is considered in terms of target detectability. The experimental setup consists of a 15×20 pixel scene possibly containing a 3×10 target. Each trial used 500 training scenes plus 500 testing scenes. Results indicate that HONNs yield similar geometric invariant target recognition properties to classical template matching. However, the HONNs require an order of magnitude less computer processing time compared with template matching. Results also indicate that HONNs could be considered for real-time target recognition applications.