
Showing papers on "Feature (machine learning)" published in 1995


Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,056 citations


Book
29 Dec 1995
TL;DR: This book, by the authors of the Neural Network Toolbox for MATLAB, provides a clear and detailed coverage of fundamental neural network architectures and learning rules, as well as methods for training them and their applications to practical problems.
Abstract: This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules. In it, the authors emphasize a coherent presentation of the principal neural networks, methods for training them, and their applications to practical problems. Features include: extensive coverage of training methods for both feedforward networks (including multilayer and radial basis networks) and recurrent networks; in addition to conjugate gradient and Levenberg-Marquardt variations of the backpropagation algorithm, coverage of Bayesian regularization and early stopping, which help ensure the generalization ability of trained networks; associative and competitive networks, including feature maps and learning vector quantization, explained with simple building blocks; a chapter of practical training tips for function approximation, pattern recognition, clustering and prediction, along with five chapters presenting detailed real-world case studies; and detailed examples and numerous solved problems. Slides and comprehensive demonstration software can be downloaded from hagan.okstate.edu/nnd.html.
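Early stopping, one of the generalization techniques the book covers, can be sketched as: train, monitor validation error after each epoch, and keep the parameters from the best validation epoch. The toy 1-D linear model, learning rate, and patience setting below are illustrative assumptions, not the book's examples.

```python
import random

random.seed(0)
# Toy 1-D regression task: y is roughly 2x plus noise; a held-out
# validation set monitors generalization during gradient descent.
train = [(x / 10, 2 * x / 10 + random.gauss(0, 0.2)) for x in range(20)]
val = [(x / 10 + 0.05, 2 * (x / 10 + 0.05) + random.gauss(0, 0.2)) for x in range(10)]

def mse(w, b, data):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

w = b = 0.0
best = (float("inf"), w, b)               # (best val error, parameter snapshot)
patience, bad_epochs = 10, 0
for epoch in range(500):
    for x, y in train:                    # one epoch of plain SGD
        err = w * x + b - y
        w -= 0.05 * err * x
        b -= 0.05 * err
    v = mse(w, b, val)
    if v < best[0]:
        best, bad_epochs = (v, w, b), 0   # validation improved: snapshot
    else:
        bad_epochs += 1
        if bad_epochs >= patience:        # no improvement for 10 epochs: stop
            break
best_val, w, b = best                     # restore the best-validation params
```

The snapshot-and-restore step is what makes early stopping a regularizer: the returned parameters are those that generalized best, not those that fit the training set longest.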

6,463 citations


Posted Content
TL;DR: The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.
Abstract: We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The statistical modeling techniques introduced in this paper differ from those common to much of the natural language processing literature since there is no probabilistic finite state or push-down automaton on which the model is built. Our approach also differs from the techniques common to the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing. Key words: random field, Kullback-Leibler divergence, iterative scaling, divergence geometry, maximum entropy, EM algorithm, statistical learning, clustering, word morphology, natural language processing
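The training loop described — features weighted by minimizing KL divergence, with weights estimated by iterative scaling — can be sketched on a toy problem. Generalized iterative scaling (GIS) with a correction feature stands in here for the paper's scaling algorithm, and the outcome space, features, and empirical distribution are all illustrative assumptions.

```python
import math

X = [0, 1, 2, 3]                           # a tiny discrete outcome space
f0 = lambda x: 1.0 if x % 2 == 0 else 0.0  # feature: "x is even"
f1 = lambda x: 1.0 if x >= 2 else 0.0      # feature: "x is large"
C = 2.0                                    # max active-feature total per outcome
fc = lambda x: C - f0(x) - f1(x)           # GIS correction feature
feats = [f0, f1, fc]

# Empirical feature expectations from a hypothetical sample distribution q.
q = [0.2, 0.2, 0.3, 0.3]
target = [sum(qi * f(x) for qi, x in zip(q, X)) for f in feats]

def model_probs(w):
    # p(x) proportional to exp(sum_i w_i f_i(x))
    s = [math.exp(sum(wi * f(x) for wi, f in zip(w, feats))) for x in X]
    Z = sum(s)
    return [si / Z for si in s]

w = [0.0, 0.0, 0.0]
for _ in range(2000):
    p = model_probs(w)
    for i, f in enumerate(feats):
        Ep = sum(pi * f(x) for pi, x in zip(p, X))
        w[i] += math.log(target[i] / Ep) / C   # GIS weight update
p = model_probs(w)
```

Matching the model's feature expectations to the empirical ones is equivalent to minimizing the KL divergence between the empirical distribution and this exponential-family model.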

1,140 citations


Book ChapterDOI
26 Oct 1995
TL;DR: In this article, the authors show that the free behavior of a reasonably intelligent human can be understood as the product of a complex but finite and determinate set of laws, and that the depth of the explanation is striking.
Abstract: Effort was directed toward showing that the techniques that have emerged for constructing sophisticated problem-solving programs also provide us with new, strong tools for constructing theories of human thinking. They allow us to merge the rigor and objectivity associated with behaviorism with the wealth of data and complex behavior associated with the gestalt movement. To this end their key feature is not that they provide a general framework for understanding problem-solving behavior (although they do that too), but that they finally reveal with great clarity that the free behavior of a reasonably intelligent human can be understood as the product of a complex but finite and determinate set of laws. Although we know this only for small fragments of behavior, the depth of the explanation is striking.

738 citations


Journal ArticleDOI
TL;DR: In this article, an architecture of locally excitatory, globally inhibitory oscillator networks is proposed and investigated both analytically and by computer simulation, where each oscillator corresponds to a standard relaxation oscillator with two time scales.

376 citations


Ron Kohavi1
01 Sep 1995
TL;DR: This doctoral dissertation concludes that repeated runs of five-fold cross-validation give a good tradeoff between bias and variance for the problem of model selection used in later chapters.
Abstract: In this doctoral dissertation, we study three basic problems in machine learning and two new hypothesis spaces with corresponding learning algorithms. The problems we investigate are: accuracy estimation, feature subset selection, and parameter tuning. The latter two problems are related and are studied under the wrapper approach. The hypothesis spaces we investigate are: decision tables with a default majority rule (DTMs) and oblivious read-once decision graphs (OODGs). For accuracy estimation, we investigate cross-validation and the .632 bootstrap. We show examples where they fail and conduct a large scale study comparing them. We conclude that repeated runs of five-fold cross-validation give a good tradeoff between bias and variance for the problem of model selection used in later chapters. We define the wrapper approach and use it for feature subset selection and parameter tuning. We relate definitions of feature relevancy to the set of optimal features, which is defined with respect to both a concept and an induction algorithm. The wrapper approach requires a search space, operators, a search engine, and an evaluation function. We investigate all of them in detail and introduce compound operators for feature subset selection. Finally, we abstract the search problem into search with probabilistic estimates. We introduce decision tables with a default majority rule (DTMs) to test the conjecture that feature subset selection is a very powerful bias. The accuracy of induced DTMs is surprisingly high, and we conclude that this bias is extremely important for many real-world datasets. We show that the resulting decision tables are very small and can be succinctly displayed. We study properties of oblivious read-once decision graphs (OODGs) and show that they do not suffer from some inherent limitations of decision trees. We describe a general framework for constructing OODGs bottom-up and specialize it using the wrapper approach.
We show that the graphs produced use fewer features than C4.5, the state-of-the-art decision tree induction algorithm, and are usually easier for humans to comprehend.
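The recommended estimator — repeated runs of five-fold cross-validation — can be sketched as follows. The dataset and the majority-class "learner" are toy stand-ins for illustration, not the dissertation's experiments.

```python
import random

# k-fold cross-validation: shuffle, split into k folds, train on k-1 folds
# and test on the held-out fold, then average the k accuracies.
def k_fold_accuracy(data, k, learner, rng):
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test_fold = folds[i]
        train_part = [x for j, f in enumerate(folds) if j != i for x in f]
        accs.append(learner(train_part, test_fold))
    return sum(accs) / k

def majority_learner(train_part, test_fold):
    labels = [y for _, y in train_part]
    pred = max(set(labels), key=labels.count)          # predict majority class
    return sum(1 for _, y in test_fold if y == pred) / len(test_fold)

rng = random.Random(0)
data = [(i, 1 if i % 3 else 0) for i in range(90)]     # two-thirds labeled 1
# Averaging the 5-fold estimate over repeated shuffles reduces the variance
# introduced by any single partitioning.
runs = [k_fold_accuracy(data, 5, majority_learner, rng) for _ in range(10)]
estimate = sum(runs) / len(runs)
```

Each run re-shuffles before folding, so the repeats trade a little extra computation for a lower-variance accuracy estimate, which is the tradeoff the dissertation argues for.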

338 citations


Dissertation
16 Sep 1995
TL;DR: The development of a model-based noise compensation technique, Parallel Model Combination, to alter the parameters of a set of Hidden Markov Model (HMM) based acoustic models, so that they reflect speech spoken in a new acoustic environment, is detailed.
Abstract: This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration, except where stated. It has not been submitted in whole or part for a degree at any other university. The length of this thesis including footnotes and appendices is approximately 36,000 words. Summary: This thesis details the development of a model-based noise compensation technique, Parallel Model Combination (PMC). The aim of PMC is to alter the parameters of a set of Hidden Markov Model (HMM) based acoustic models, so that they reflect speech spoken in a new acoustic environment. Differences in the acoustic environment may result from additive noise, such as cars passing and fans; or convolutional noise, such as channel differences between training and testing. Both these classes of noise have been found to seriously degrade speech recognition performance and both may be handled within the PMC framework. The effect of noise on the clean speech distributions and associated parameters was investigated. The shape of the corrupted-speech distribution could become distinctly non-Gaussian, even when Gaussian speech and noise sources were used. This indicates that, to achieve good performance in additive noise, some flexibility in the component alignments, or number, is required within a state. For the model parameters, additive noise was found to alter all the means and the variances of both the static and, where present, dynamic coefficients. This shifting of HMM parameters was observed in terms of both a distance measure, the average Kullback-Leibler number on a feature vector component level, and the effect on word accuracy. For best performance in noise-corrupted environments, it is necessary to compensate all these parameters. Various methods for compensating the HMMs are described. These may be split into two classes.
The first, non-iterative PMC, assumes that the frame/state component alignment associated with the speech models and the clean speech data is unaltered by the addition of noise. This implies that the corrupted-speech distributions are approximately Gaussian, which is known to be false. However, this assumption allows rapid adaptation of the model parameters. The second class of PMC is iterative PMC, where only the frame/state alignment is assumed unaltered. By allowing the component alignment within a state to vary, it is possible to better model the corrupted-speech distribution. One implementation is described, Data-driven Parallel Model Combination (DPMC). A simple and effective method of estimating the convolutional noise component in the …

320 citations


Journal ArticleDOI
TL;DR: A connectionist expert system model, based on a fuzzy version of the multilayer perceptron developed by the authors, is proposed, which infers the output class membership value(s) of an input pattern and also generates a measure of certainty expressing confidence in the decision.
Abstract: A connectionist expert system model, based on a fuzzy version of the multilayer perceptron developed by the authors, is proposed. It infers the output class membership value(s) of an input pattern and also generates a measure of certainty expressing confidence in the decision. The model is capable of querying the user for the more important input feature information, if and when required, in case of partial inputs. Justification for an inferred decision may be produced in rule form, when so desired by the user. The magnitudes of the connection weights of the trained neural network are utilized in every stage of the proposed inferencing procedure. The antecedent and consequent parts of the justificatory rules are provided in natural forms. The effectiveness of the algorithm is tested on the speech recognition problem, on some medical data and on artificially generated intractable (linearly nonseparable) pattern classes.

204 citations


Journal ArticleDOI
TL;DR: In this article, four experiments were designed to test two predictions of prototype theory: that when the defining (necessary) features of a concept are only partially matched by an instance, characteristic (nonnecessary) features of the concept can affect categorization; and that the effect of changing a feature is greatest when the other features are all positive, so that categorization probability is at a maximum.

172 citations


Journal ArticleDOI
TL;DR: In this paper, a heteroscedastic random utility model which allows for a flexible pattern of cross elasticities at the household level is explored, which enables the model to describe patterns of price sensitivity among competing brands which correspond to the competitive structure reflected in consideration sets.

158 citations


Journal ArticleDOI
TL;DR: Improvements to the learning rule make ART1.5-SSS a stable non-hierarchical cluster analyzer and feature extractor, even in small-sample-size conditions, and enable quick, automatic rule building in Kansei Engineering expert systems.

Journal ArticleDOI
TL;DR: The parametric pattern recognition (PPR) algorithm that facilitates automatic MUAP feature extraction and Artificial Neural Network (ANN) models are combined for providing an integrated system for the diagnosis of neuromuscular disorders.
Abstract: In previous years, several computer-aided quantitative motor unit action potential (MUAP) techniques were reported. It is now possible to add to these techniques the capability of automated medical diagnosis so that all data can be processed in an integrated environment. In this study, the parametric pattern recognition (PPR) algorithm that facilitates automatic MUAP feature extraction and Artificial Neural Network (ANN) models are combined to provide an integrated system for the diagnosis of neuromuscular disorders. Two paradigms of learning for training ANN models were investigated: supervised and unsupervised. For supervised learning, the back-propagation algorithm was used; for unsupervised learning, Kohonen's self-organizing feature map algorithm. The diagnostic yield for models trained with both procedures was similar, on the order of 80%. However, back-propagation models required considerably more computational effort than the Kohonen self-organizing feature map models. Poorer diagnostic performance was obtained when the K-means nearest neighbor clustering algorithm was applied to the same set of data.
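Kohonen's self-organizing feature map, one of the two training paradigms compared above, can be sketched minimally: units on a grid compete for each input, and the winner and its neighbors move toward it. The map size, learning schedule, and two-cluster data below are illustrative assumptions, not the EMG setup.

```python
import random

random.seed(1)
# Eight map units with 2-D weight vectors, arranged on a 1-D grid.
units = [[random.random(), random.random()] for _ in range(8)]

def winner(x):
    # Competition: the unit whose weight vector is closest to the input wins.
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

def train(samples, epochs=30):
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                 # decaying learning rate
        radius = max(1, int(3 * (1 - t / epochs)))  # shrinking neighborhood
        for x in samples:
            w = winner(x)
            for i in range(len(units)):
                if abs(i - w) <= radius:            # winner and its neighbors
                    units[i] = [u + lr * (v - u) for u, v in zip(units[i], x)]

# Two hypothetical clusters; after training they excite different winners.
cluster_a = [[0.1 + 0.02 * k, 0.1] for k in range(5)]
cluster_b = [[0.9 - 0.02 * k, 0.9] for k in range(5)]
train(cluster_a + cluster_b)
```

The shrinking neighborhood is what gives the map its topology-preserving behavior: early epochs order the units globally, late epochs fine-tune each unit locally.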

Proceedings Article
J. Bala1, J. Huang1, H. Vafaie1, K. Dejong1, Harry Wechsler1 
20 Aug 1995
TL;DR: A hybrid learning methodology that integrates genetic algorithms (GAs) and decision tree learning (ID3) in order to evolve optimal subsets of discriminatory features for robust pattern classification is introduced.
Abstract: This paper introduces a hybrid learning methodology that integrates genetic algorithms (GAs) and decision tree learning (ID3) in order to evolve optimal subsets of discriminatory features for robust pattern classification. A GA is used to search the space of all possible subsets of a large set of candidate discrimination features. For a given feature subset, ID3 is invoked to produce a decision tree. The classification performance of the decision tree on unseen data is used as a measure of fitness for the given feature set, which, in turn, is used by the GA to evolve better feature sets. This GA-ID3 process iterates until a feature subset is found with satisfactory classification performance. Experimental results are presented which illustrate the feasibility of our approach on difficult problems involving recognizing visual concepts in satellite and facial image data. The results also show improved classification performance and reduced description complexity when compared against standard methods for feature selection.
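The GA-wrapper loop can be sketched as follows, with a 1-nearest-neighbor classifier standing in for ID3 and synthetic data in which only the first two of six features are informative. Population size, rates, and the held-out fitness split are illustrative assumptions.

```python
import random

random.seed(2)
N_FEATURES = 6

def make_point(label):
    # Only the first two features carry class information; the rest is noise.
    x = [label + random.gauss(0, 0.1), -label + random.gauss(0, 0.1)]
    x += [random.gauss(0, 1) for _ in range(N_FEATURES - 2)]
    return x, label

train = [make_point(random.choice([0, 1])) for _ in range(40)]
held_out = [make_point(random.choice([0, 1])) for _ in range(40)]

def accuracy(mask):
    # Fitness of a feature mask: held-out accuracy of 1-NN restricted to it.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi, m in zip(a, b, mask) if m)
    hits = sum(min(train, key=lambda p: dist(p[0], x))[1] == y
               for x, y in held_out)
    return hits / len(held_out)

def evolve(generations=15, pop_size=12):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy, reverse=True)
        parents = pop[:pop_size // 2]              # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.2:
                child[random.randrange(N_FEATURES)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=accuracy)

best = evolve()
```

The defining wrapper property is visible in `accuracy`: the induction algorithm itself (here 1-NN, in the paper ID3) is run inside the fitness function, so the evolved subset is tuned to that learner rather than to a filter criterion.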

Journal ArticleDOI
TL;DR: A brief review of methods used in automatic feature recognition is presented, covering cell division, cavity volume, convex hull, laminae slicing, and miscellaneous techniques including graph-based and hint-based feature recognition methods.

Journal ArticleDOI
TL;DR: The results show that fabric defects inspected by means of image recognition in accordance with the artificial neural network agree approximately with initial expectations.
Abstract: In this paper, we evaluate the efficiency and accuracy of a method of detecting fabric defects that have been classified into different categories by a neural network. Four kinds of fabric defects most likely to be found during weaving were learned by the network. Based on the principle of the back-propagation algorithm of learning rule, fabric defects could be detected and classified exactly. The method used for processing image feature extraction is a co-occurrence-based method, by which six feature parameters are obtained. All of them consist of contrast measurements, which involve three spatial displacements (i.e., 1, 12, 16) and four directions (0, 45, 90, 135 degrees) of fabric defects' images used for classification. The results show that fabric defects inspected by means of image recognition in accordance with the artificial neural network agree approximately with initial expectations.
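The co-occurrence contrast features described can be sketched directly: for a chosen displacement, count gray-level pairs and sum the squared level difference weighted by co-occurrence probability. The tiny binary image and displacement pair below are illustrative, not the paper's displacements (1, 12, 16) across four directions.

```python
# Gray-level co-occurrence (GLCM) contrast for a displacement (dr, dc).
def glcm_contrast(image, dr, dc):
    rows, cols = len(image), len(image[0])
    counts = {}
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pair = (image[r][c], image[r2][c2])   # co-occurring levels
                counts[pair] = counts.get(pair, 0) + 1
                total += 1
    # Contrast: expected squared gray-level difference over co-occurring pairs.
    return sum(((i - j) ** 2) * n / total for (i, j), n in counts.items())

# Vertical stripes: strong contrast horizontally, none vertically.
stripes = [[0, 1, 0, 1],
           [0, 1, 0, 1],
           [0, 1, 0, 1],
           [0, 1, 0, 1]]
horizontal = glcm_contrast(stripes, 0, 1)   # direction 0 degrees
vertical = glcm_contrast(stripes, 1, 0)     # direction 90 degrees
```

A defect that breaks the weave's periodicity changes these direction-dependent contrast values, which is what makes them usable as classification features.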

Journal ArticleDOI
TL;DR: Artificial neural networks have proven to be an interesting and useful alternate processing strategy for automatic target recognition (ATR) and the relation of neural classifiers to Bayesian techniques is emphasized along with the more recent use of feature sequences to enhance classification.

Journal ArticleDOI
01 Sep 1995-Ecology
TL;DR: In this article, the authors respond to criticisms of ratio-dependent predator-prey theory, arguing that changes in predator and prey numbers should be related to consumer/resource ratios.
Abstract: We were asked to coordinate our response to criticisms by Abrams (1994), Gleeson (1994), and Sarnelle (1994) of our Special Feature on "ratio-dependent" predator-prey theory (Mattson and Berryman 1992). This has been no easy task, for the Special Feature contained a diversity of views and approaches to modeling predator-prey dynamics. For example, Berryman (1992) arrives at ratio-dependent predation through extension of the logistic equation, Arditi and Ginzburg (1989) through modification of the predator's functional response, and Gutierrez (1992) by combining the physiological process of energy allocation with random search for prey. Thus, although we may not use the same explicit form for the predator-prey equations, we agree on the underlying rationale, that changes in predator and prey numbers should be related to consumer/resource ratios. Our response is presented in two papers. In the first, Akçakaya et al. (1995) responded to criticism of their ratio-dependent functional response equation. This is because detailed and specific criticism has been directed, almost entirely, at the Arditi-Ginzburg (1989) model. For example, Gleeson (1994) objects to the assumption that predators "divide up" the prey before beginning to feed, and to prey becoming "infinitely available" as predators become infinitely rare, but these objections do not apply to the other models. In this paper we try to explain and clarify the differences and commonalities in our separate approaches, because our critics seem confused on this issue. We then respond to the criticism that ratio-dependent the-
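For reference, the ratio-dependent functional response at the center of this exchange (the Arditi-Ginzburg form) replaces prey density N with the prey-to-predator ratio N/P in a Holling type II response. The notation below is the conventional one and is an assumption here, not quoted from the Special Feature:

```latex
% Ratio-dependent (Arditi-Ginzburg) predator-prey model: the functional
% response g depends on the ratio N/P rather than on prey density N alone.
\begin{align}
  \frac{dN}{dt} &= rN\left(1 - \frac{N}{K}\right) - P\,g\!\left(\frac{N}{P}\right), \\
  \frac{dP}{dt} &= e\,P\,g\!\left(\frac{N}{P}\right) - \mu P, \qquad
  g\!\left(\frac{N}{P}\right) = \frac{a\,N/P}{1 + a\,h\,N/P},
\end{align}
```

where r and K are the prey's growth rate and carrying capacity, a the attack rate, h the handling time, e the conversion efficiency, and μ the predator mortality rate.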

Journal ArticleDOI
TL;DR: This paper reviews the major developments in the field of feature-based modelling with particular emphasis on automatic feature recognition systems.
Abstract: Features capture the engineering significance of a part model and serve as an important support tool for integrated manufacturing. Feature-based design systems typically act as ‘interpreters’ between the CAD and the CAM activities. These systems can be classified broadly into human-assisted feature definition systems, automatic feature recognition systems and design by features systems. Researchers have come to realize that the best system architecture for a feature-based system would be a blend of the above-mentioned approaches. This paper reviews the major developments in the field of feature-based modelling with particular emphasis on automatic feature recognition systems. The approaches used for automatic feature recognition systems are systematically categorized and discussed. Automated feature recognition systems are broadly categorized into volume feature recognition systems and surface feature recognition systems, and the published research in each of these categories is critically discus...

PatentDOI
TL;DR: An instantaneous context switching speech recognition system is disclosed which enables a speech recognition application to be changed without loading new pattern matching data into the system.
Abstract: An instantaneous context switching speech recognition system is disclosed which enables a speech recognition application to be changed without loading new pattern matching data into the system. Selectable pointer maps included in the system's memory switch the word-to-phoneme relationships from a first application context to a second application context while reusing the same pattern matching logic.

Patent
22 Jun 1995
TL;DR: In a signal pattern recognition apparatus, a plurality of feature transformation sections transform an input signal pattern into vectors in feature spaces corresponding to predetermined classes, each using a predetermined transformation parameter for its class so as to emphasize that class's features.
Abstract: In a signal pattern recognition apparatus, a plurality of feature transformation sections respectively transform an inputted signal pattern into vectors in a plurality of feature spaces corresponding respectively to predetermined classes using a predetermined transformation parameter corresponding to each of the classes so as to emphasize a feature of each of the classes, and a plurality of discriminant function sections respectively calculates a value of a discriminant function using a predetermined discriminant function representing a similarity measure of each of the classes for the transformed vectors in the plurality of feature spaces. Then, a selection section executes a signal pattern recognition process by selecting a class to which the inputted signal pattern belongs based on the calculated values of a plurality of discriminant functions corresponding respectively to the classes, and a training control section trains and sets a plurality of transformation parameters of the feature transformation process and a plurality of discriminant functions so that an error probability of the signal pattern recognition is minimized based on a predetermined training signal pattern.

Patent
12 Sep 1995
TL;DR: In this paper, an object recognition apparatus and method for real-time training and recognition/inspection of test objects are presented.
Abstract: An object recognition apparatus and method for real-time training and recognition/inspection of test objects. To train the system, digital features of an object are captured as sub-frames extracted from a data stream. The data is thresholded and digitized and used to produce an address representing the digital feature. The address is used to write a value into a memory. During recognition or inspection, extracting digital features from a test object, converting the digital features extracted from the test object into addresses, and using the addresses developed from the test object to address the memory to correlate whether the same memory locations are addressed determines whether the test object matches the reference object.

Journal ArticleDOI
TL;DR: This paper presents purely functional implementations of queues and double-ended queues (deques) requiring only O(1) time per operation in the worst case, with a strange feature of requiring some laziness – but not too much!
Abstract: We present purely functional implementations of queues and double-ended queues (deques) requiring only O(1) time per operation in the worst case. Our algorithms are considerably simpler than previous designs with the same bounds. The inspiration for our approach is the incremental behavior of certain functions on lazy lists. Capsule Review: This paper presents another example of the ability to write programs in functional languages that satisfy our desire for clarity while satisfying our need for efficiency. In this case, the subject (often-studied) is the implementation of queues and deques that are functional and exhibit constant-time worst-case insertion and deletion operations. Although the problem has been solved previously, this paper presents the simplest algorithm so far. As the author notes, it has a strange feature of requiring some laziness – but not too much!
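The two-list queue underlying this design can be sketched eagerly in Python. Note the hedge: the paper's actual contribution — worst-case O(1) bounds via incremental, lazy rotation of the rear list — is not reproduced by this eager sketch, which only achieves the classic amortized bounds.

```python
# Purely functional FIFO queue as a (front, rear) pair of tuples;
# rear holds newest-first, front holds oldest-first.
def empty():
    return ((), ())

def check(q):
    # Invariant: the front is empty only when the whole queue is empty.
    front, rear = q
    if front:
        return q
    return (tuple(reversed(rear)), ())     # occasional O(n) reversal

def snoc(q, x):                            # enqueue at the rear: O(1)
    front, rear = q
    return check((front, (x,) + rear))

def head(q):                               # oldest element
    return q[0][0]

def tail(q):                               # dequeue from the front
    front, rear = q
    return check((front[1:], rear))

q = empty()
for i in range(5):
    q = snoc(q, i)
vals = []
while q != empty():
    vals.append(head(q))
    q = tail(q)
```

Every operation returns a new queue and never mutates an old one, so earlier versions remain valid — the persistence property that makes the worst-case (rather than amortized) bounds of the paper's lazy version non-trivial.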

01 May 1995
TL;DR: In this paper, the authors present a technique for constructing random fields from a set of training samples, where each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data.
Abstract: We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing.

Journal ArticleDOI
Eiichi Tanaka1
TL;DR: This paper describes several aspects of syntactic pattern recognition from various points of view including the relation between the set of patterns and grammars, the semantics of a language, the expressive power of a grammar, grammatical inference, and a comparison of computing costs between a syntactic method and a prototype matching method.

Journal ArticleDOI
TL;DR: The paper presents an efficient method for tone recognition of isolated Cantonese syllables: suprasegmental feature parameters are extracted from the voiced portion of a monosyllabic utterance and a three-layer feedforward neural network is used to classify these feature vectors.

Abstract: Tone identification is essential for the recognition of the Chinese language, specifically for Cantonese, which is well known for being very rich in tones. The paper presents an efficient method for tone recognition of isolated Cantonese syllables. Suprasegmental feature parameters are extracted from the voiced portion of a monosyllabic utterance and a three-layer feedforward neural network is used to classify these feature vectors. Using a phonologically complete vocabulary of 234 distinct syllables, the recognition accuracy for single-speaker and multispeaker tests is 89.0% and 87.6% respectively.

Book ChapterDOI
Marco Richeldi1, Mauro Rossotto1
25 Apr 1995
TL;DR: StatDisc is described, a statistical algorithm that supports supervised learning by performing class-driven discretization that provides a concise summarization of continuous attributes by investigating the data composition.
Abstract: Discretization is a pre-processing step of the learning task which offers cognitive benefits as well as computational ones. This paper describes StatDisc, a statistical algorithm that supports supervised learning by performing class-driven discretization. StatDisc provides a concise summarization of continuous attributes by investigating the data composition, i.e., by discovering intervals of the numeric attribute values wherein the distribution of classes is homogeneous and contrasts strongly with the distribution in other intervals. Experimental results from a variety of domains confirm that discretizing real attributes causes little loss of learning accuracy while offering a large reduction in learning time.
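The interval-discovery idea can be sketched with a ChiMerge-style greedy merge: start with one interval per value and repeatedly merge the adjacent pair whose class distributions are most similar. StatDisc's actual statistical test is not reproduced here; the L1 similarity measure and toy attribute are assumptions for illustration.

```python
# Class-driven discretization sketch: greedily merge adjacent intervals
# with similar class distributions until n_intervals remain.
def discretize(values, labels, n_intervals):
    classes = sorted(set(labels))
    data = sorted(zip(values, labels))
    # Start with one interval per distinct value: [low, high, class counts].
    intervals = []
    for v, y in data:
        if intervals and intervals[-1][0] == v:
            intervals[-1][2][y] = intervals[-1][2].get(y, 0) + 1
        else:
            intervals.append([v, v, {y: 1}])

    def distance(a, b):
        # L1 gap between the two intervals' class proportions.
        na, nb = sum(a[2].values()), sum(b[2].values())
        return sum(abs(a[2].get(c, 0) / na - b[2].get(c, 0) / nb)
                   for c in classes)

    while len(intervals) > n_intervals:
        i = min(range(len(intervals) - 1),
                key=lambda k: distance(intervals[k], intervals[k + 1]))
        lo, _, ca = intervals[i]
        _, hi, cb = intervals[i + 1]
        merged = dict(ca)
        for c, n in cb.items():
            merged[c] = merged.get(c, 0) + n
        intervals[i:i + 2] = [[lo, hi, merged]]
    return [(lo, hi) for lo, hi, _ in intervals]

# Hypothetical attribute whose class flips at value 5.
vals = [1, 2, 3, 4, 5, 6, 7, 8]
labs = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
cuts = discretize(vals, labs, 2)
```

Because same-class neighbors have distance zero, they merge first, and the cut that survives is exactly the class boundary — the "homogeneous within, contrasting between" property the abstract describes.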

Journal ArticleDOI
TL;DR: The effects of additive background noise on speech quality and recognition parameters are discussed, and a source generator based framework to address stress and noise is proposed.

Proceedings ArticleDOI
09 May 1995
TL;DR: A new set of speech feature representations for robust speech recognition in the presence of car noise is proposed, based on subband analysis of the speech signal, and the performances of the new feature representations are compared to mel scale cepstral coefficients.
Abstract: A new set of speech feature representations for robust speech recognition in the presence of car noise is proposed. These parameters are based on subband analysis of the speech signal. Line spectral frequency (LSF) representation of the linear prediction (LP) analysis in subbands and cepstral coefficients derived from subband analysis (SUBCEP) are introduced, and the performances of the new feature representations are compared to mel scale cepstral coefficients (MELCEP) in the presence of car noise. Subband analysis based parameters are observed to be more robust than the commonly employed MELCEP representations.
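The LP analysis step at the heart of the subband LSF features can be sketched with the Levinson-Durbin recursion. In practice r[k] would be the autocorrelation of a windowed subband frame; to keep the check exact, the ideal autocorrelation of a sinusoid, r[k] = cos(0.3k), is used here, and the further step of converting predictor coefficients to line spectral frequencies is not shown.

```python
import math

# Levinson-Durbin recursion: solve the Yule-Walker equations for the
# order-p linear predictor given autocorrelation values r[0..p].
def levinson_durbin(r, order):
    a = [1.0] + [0.0] * order          # prediction polynomial A(z), a[0] = 1
    e = r[0]                           # prediction error energy
    for i in range(1, order + 1):
        k = -sum(a[j] * r[i - j] for j in range(i)) / e   # reflection coeff
        a = [a[j] + k * a[i - j] for j in range(i + 1)] + a[i + 1:]
        e *= 1 - k * k
    return a, e

# A pure sinusoid is perfectly predictable at order 2: its exact predictor
# polynomial is A(z) = 1 - 2 cos(0.3) z^-1 + z^-2, with zero residual.
r = [1.0, math.cos(0.3), math.cos(0.6)]
a, err = levinson_durbin(r, 2)
```

The recursion's O(p^2) cost (versus O(p^3) for a general linear solve) is why it is the standard way to compute LP coefficients per frame, whether on the full band or on each subband.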

Proceedings ArticleDOI
31 Aug 1995
TL;DR: A preliminary study also confirms that a similar DBNN recognizer can effectively recognize palms, which could potentially offer a much more reliable biometric feature.
Abstract: This paper proposes a face/palm recognition system based on decision-based neural networks (DBNN). The face recognition system consists of three modules. First, the face detector finds the location of a human face in an image. The eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The proposed facial region contains eyebrows, eyes, and nose, but excludes the mouth (eyeglasses are permissible). Lastly, the third module is a face recognizer. The DBNN can be effectively applied to all three modules. It adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates its successful application to face recognition on both the public (FERET) and in-house (SCR) databases. In terms of speed, given the extracted features, the training phase for 100-200 persons would take less than one hour on a Sparc10. The whole recognition process (including eye localization, feature extraction, and classification using the DBNN) may consume only a fraction of a second on a Sparc10. Experiments on three different databases all demonstrated high recognition accuracies. A preliminary study also confirms that a similar DBNN recognizer can effectively recognize palms, which could potentially offer a much more reliable biometric feature.

Proceedings ArticleDOI
16 May 1995
TL;DR: A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence, which ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye.
Abstract: A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person's iris in a real time video image is encoded into a compact sequence of multi scale quadrature 2D Gabor wavelet coefficients, whose most significant bits comprise a 256 byte "iris code." Statistical decision theory generates identification decisions from Exclusive OR comparisons of complete iris codes at the rate of 10,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical "cross-over" error rate of one in 131,000 when a decision criterion is adopted that would equalise the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about 10^31.
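The Exclusive OR comparison described can be sketched as a normalized Hamming distance test. The 2048-bit length matches the 256-byte code in the abstract, but the 0.32 decision threshold and the ~5% re-imaging noise below are illustrative assumptions, not the paper's operating point.

```python
import random

def hamming(a, b):
    return bin(a ^ b).count("1")           # XOR, then count disagreeing bits

def match(code_a, code_b, n_bits, threshold=0.32):
    # Declare "same eye" when the normalized Hamming distance is below the
    # decision criterion; independent eyes cluster near distance 0.5.
    return hamming(code_a, code_b) / n_bits < threshold

N = 2048                                   # 256-byte iris code
random.seed(7)
eye = random.getrandbits(N)
noise = 0
for i in random.sample(range(N), 100):     # ~5% of bits flip on re-imaging
    noise |= 1 << i
same_eye = eye ^ noise
other_eye = random.getrandbits(N)          # an independent, different eye
```

Because each comparison is a single big-integer XOR plus a popcount, throughput on the order of the paper's 10,000 comparisons per second is plausible even on modest hardware.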