
Showing papers on "Artificial neural network" published in 1993


Journal ArticleDOI
01 May 1993
TL;DR: The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks.
Abstract: The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.

15,085 citations
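To make the inference mechanism concrete, here is a minimal sketch of the first-order Sugeno fuzzy inference that ANFIS implements, with two rules, Gaussian membership functions, and purely illustrative parameter values (in ANFIS these would be fitted by the hybrid gradient/least-squares learning rule):

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function (ANFIS layer 1)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x, y):
    """Two-rule first-order Sugeno inference. All premise and consequent
    parameters below are illustrative placeholders, not fitted values."""
    # Layers 1-2: membership grades and rule firing strengths (product T-norm)
    w1 = gauss_mf(x, 0.0, 1.0) * gauss_mf(y, 0.0, 1.0)
    w2 = gauss_mf(x, 2.0, 1.0) * gauss_mf(y, 2.0, 1.0)
    # Layer 3: normalized firing strengths
    wb1, wb2 = w1 / (w1 + w2), w2 / (w1 + w2)
    # Layers 4-5: linear (first-order Sugeno) consequents, weighted output
    f1 = 1.0 * x + 0.5 * y + 0.1
    f2 = -0.3 * x + 2.0 * y - 1.0
    return wb1 * f1 + wb2 * f2

print(anfis_forward(1.0, 1.5))   # scalar prediction for one input pair
```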


Book ChapterDOI
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Abstract: Publisher Summary This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses the gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. A lot of research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue. It closes with the discussion of performance and implementation issues.

13,033 citations


Proceedings Article
Jane Bromley, Isabelle Guyon, Yann LeCun, E. Sackinger, Roopak Shah
29 Nov 1993
TL;DR: An algorithm for verification of signatures written on a pen-input tablet based on a novel artificial neural network called a "Siamese" neural network, which consists of two identical sub-networks joined at their outputs.
Abstract: This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel artificial neural network, called a "Siamese" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted; all other signatures are rejected as forgeries.

2,980 citations
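A rough sketch of the Siamese verification idea in numpy: a single shared feature extractor (here an illustrative random linear map standing in for the paper's trained sub-networks) is applied to both signatures, and acceptance is a distance threshold in feature space:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 100)) / 10.0  # shared weights: the sub-networks are identical

def features(signature):
    """One sub-network: map a (here 100-sample) signature to a feature vector."""
    return np.tanh(W @ signature)

def verify(candidate, stored_feature, threshold=0.5):
    """Accept iff the candidate's feature vector lies within `threshold` of the
    signer's stored feature vector (cosine distance here; the paper's joining
    neuron measures the distance between the two feature vectors)."""
    f = features(candidate)
    cos = f @ stored_feature / (np.linalg.norm(f) * np.linalg.norm(stored_feature))
    return 1.0 - cos < threshold

stored = features(rng.normal(size=100))       # enrolled genuine signature
print(verify(rng.normal(size=100), stored))   # likely rejected as a forgery
```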


Journal ArticleDOI
TL;DR: An object recognition system based on the dynamic link architecture, an extension to classical artificial neural networks (ANNs), is presented and the implementation on a transputer network achieved recognition of human faces and office objects from gray-level camera images.
Abstract: An object recognition system based on the dynamic link architecture, an extension to classical artificial neural networks (ANNs), is presented. The dynamic link architecture exploits correlations in the fine-scale temporal structure of cellular signals to group neurons dynamically into higher-order entities. These entities represent a rich structure and can code for high-level objects. To demonstrate the capabilities of the dynamic link architecture, a program was implemented that can recognize human faces and other objects from video images. Memorized objects are represented by sparse graphs, whose vertices are labeled by a multiresolution description in terms of a local power spectrum, and whose edges are labeled by geometrical distance vectors. Object recognition can be formulated as elastic graph matching, which is performed here by stochastic optimization of a matching cost function. The implementation on a transputer network achieved recognition of human faces and office objects from gray-level camera images. The performance of the program is evaluated by a statistical analysis of recognition results from a portrait gallery comprising images of 87 persons.

1,973 citations


Book
01 Jan 1993

1,921 citations


Journal ArticleDOI
TL;DR: A novel approach to the control of a multifunction prosthesis based on the classification of myoelectric patterns is described, which increases the number of functions which can be controlled by a single channel of myoelectric signal but does so in a way which does not increase the effort required by the amputee.
Abstract: A novel approach to the control of a multifunction prosthesis based on the classification of myoelectric patterns is described. It is shown that the myoelectric signal exhibits a deterministic structure during the initial phase of a muscle contraction. Features are extracted from several time segments of the myoelectric signal to preserve pattern structure. These features are then classified using an artificial neural network. The control signals are derived from natural contraction patterns which can be produced reliably with little subject training. The new control scheme increases the number of functions which can be controlled by a single channel of myoelectric signal but does so in a way which does not increase the effort required by the amputee. Results are presented to support this approach.

1,898 citations
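A sketch of the time-segmented feature extraction the abstract describes, using mean absolute value and zero-crossing counts as hypothetical per-segment features (the paper's exact feature set is not specified here):

```python
import numpy as np

def segment_features(emg, n_segments=4):
    """Split the initial contraction burst into time segments and compute
    simple features per segment, preserving the temporal pattern structure
    that the classifier exploits."""
    feats = []
    for seg in np.array_split(emg, n_segments):
        mav = np.mean(np.abs(seg))               # mean absolute value
        zc = np.sum(np.diff(np.sign(seg)) != 0)  # zero-crossing count
        feats.extend([mav, zc])
    return np.array(feats)   # this vector is fed to the neural network

emg = np.random.default_rng(1).normal(size=800)  # stand-in for one EMG channel
print(segment_features(emg))
```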


Book ChapterDOI
01 Aug 1993
TL;DR: An expectation-maximization (EM) algorithm for adjusting the parameters of the tree-structured architecture for supervised learning is presented and an online learning algorithm in which the parameters are updated incrementally is developed.
Abstract: We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an expectation-maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.

1,689 citations
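As a simplified illustration of the EM machinery, here is the E-step for a flat (single-level) mixture of linear-Gaussian experts with a softmax gate — a special case of the hierarchical model, with all shapes and the noise level chosen for illustration:

```python
import numpy as np

def e_step(X, y, gate_W, expert_W, sigma=1.0):
    """Posterior responsibilities h[i, j] of expert j for case i.
    X: (n, d) inputs, y: (n,) targets; gate_W, expert_W: (k, d)."""
    g = np.exp(X @ gate_W.T)
    g /= g.sum(axis=1, keepdims=True)      # softmax gating probabilities
    mu = X @ expert_W.T                    # each linear expert's prediction
    lik = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)
    h = g * lik
    return h / h.sum(axis=1, keepdims=True)

# The M-step (not shown) refits each expert by responsibility-weighted least
# squares and the gate by weighted multinomial regression, as for GLIMs.
rng = np.random.default_rng(2)
X, y = rng.normal(size=(5, 3)), rng.normal(size=5)
print(e_step(X, y, rng.normal(size=(2, 3)), rng.normal(size=(2, 3))))
```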


Book
14 Apr 1993
TL;DR: A practical guide to designing, training, and evaluating neural networks, covering feedforward architectures, strategies for eluding local minima, probabilistic and hybrid networks, performance evaluation, and the accompanying NEURAL program.
Abstract: Foundations. Classification. Autoassociation. Time Series Prediction. Function Approximation. Multilayer Feedforward Networks. Eluding Local Minima I: Simulated Annealing. Eluding Local Minima II: Genetic Optimisation. Regression and Neural Networks. Designing Feedforward Network Architectures. Interpreting Weights: How Does This Thing Work? Probabilistic Neural Networks. Functional Link Networks. Hybrid Networks. Designing the Training Set. Preparing Input Data. Fuzzy Data and Processing. Unsupervised Training. Evaluating Performance of Neural Networks. Confidence Measures. Optimizing the Decision Threshold. Using the NEURAL Program. Appendix. Bibliography. Index.

1,671 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that most of the characterizations that were reported thus far in the literature are special cases of the following general result: a standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial.

1,581 citations
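Stated loosely in LaTeX (a paraphrase, not the authors' exact formulation), the result says that single-hidden-layer networks are dense in the continuous functions exactly when the activation is non-polynomial:

```latex
\text{For } \sigma \text{ locally bounded and piecewise continuous:}\quad
\overline{\operatorname{span}}\,\bigl\{\,\sigma(w \cdot x - \theta) \;:\; w \in \mathbb{R}^{n},\ \theta \in \mathbb{R}\,\bigr\}
\text{ is dense in } C(\mathbb{R}^{n})
\text{ (uniformly on compact sets)}
\iff \sigma \text{ is not a polynomial (almost everywhere).}
```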


Book
01 Jan 1993
TL;DR: This book provides a guide to the fundamental mathematics of neurocomputing, a review of neural network models and their associated algorithms, and state-of-the-art procedures for solving optimization problems.
Abstract: From the Publisher: Artificial neural networks can be employed to solve a wide spectrum of problems in optimization, parallel computing, matrix algebra and signal processing. Taking a computational approach, this book explains how ANNs provide solutions in real time, and allow the visualization and development of new techniques and architectures. Features include a guide to the fundamental mathematics of neurocomputing, a review of neural network models and an analysis of their associated algorithms, and state-of-the-art procedures to solve optimization problems. Computer simulation programs MATLAB, TUTSIM and SPICE illustrate the validity and performance of the algorithms and architectures described. The authors encourage the reader to be creative in visualizing new approaches and detail how other specialized computer programs can evaluate performance. Each chapter concludes with a short bibliography. Illustrative worked examples, questions and problems assist self study. The authors' self-contained approach will appeal to a wide range of readers, including professional engineers working in computing, optimization, operational research, systems identification and control theory. Undergraduate and postgraduate students in computer science, electrical and electronic engineering will also find this text invaluable. In particular, the text will be ideal to supplement courses in circuit analysis and design, adaptive systems, control systems, signal processing and parallel computing.

1,522 citations


Journal ArticleDOI
TL;DR: It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood determining parameter and resemble the dynamics of Brownian particles moving in a potential determined by a data point density.
Abstract: A neural network algorithm based on a soft-max adaptation rule is presented. This algorithm exhibits good performance in reaching the optimum minimization of a cost function for vector quantization data compression. The soft-max rule employed is an extension of the standard K-means clustering procedure and takes into account a neighborhood ranking of the reference (weight) vectors. It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood determining parameter and resemble the dynamics of Brownian particles moving in a potential determined by the data point density. The network is used to represent the attractor of the Mackey-Glass equation and to predict the Mackey-Glass time series, with additional local linear mappings for generating output values. The results obtained for the time-series prediction compare favorably with the results achieved by backpropagation and radial basis function networks.
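A sketch of the soft-max adaptation rule described above: every reference vector moves toward the input, scaled by its neighborhood rank. The step size and decay constant are illustrative, and in practice both would be annealed during training:

```python
import numpy as np

def neural_gas_step(w, x, eps=0.1, lam=1.0):
    """Soft-max adaptation: the unit ranked k-th closest to x moves toward x
    with factor exp(-k / lam); lam -> 0 recovers plain K-means updates."""
    ranks = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
    w += eps * np.exp(-ranks / lam)[:, None] * (x - w)
    return w

rng = np.random.default_rng(3)
w = rng.normal(size=(10, 2))             # 10 reference (weight) vectors in 2-D
for x in rng.normal(size=(200, 2)):      # input-driven adaptation
    w = neural_gas_step(w, x)
```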

Book
01 Oct 1993
TL;DR: Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method to incorporate neural network approaches into state-of-the-art continuous speech recognition systems based on Hidden Markov Models (HMMs) to improve their performance.
Abstract: From the Publisher: Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method to incorporate neural network approaches into state-of-the-art continuous speech recognition systems based on Hidden Markov Models (HMMs) to improve their performance. In this framework, neural networks (and in particular, multilayer perceptrons or MLPs) have been restricted to well-defined subtasks of the whole system, i.e., HMM emission probability estimation and feature extraction. The book describes a successful five year international collaboration between the authors. The lessons learned form a case study that demonstrates how hybrid systems can be developed to combine neural networks with more traditional statistical approaches. The book illustrates both the advantages and limitations of neural networks in the framework of a statistical system. Using standard databases and comparing with some conventional approaches, it is shown that MLP probability estimation can improve recognition performance. Other approaches are discussed, though there is no such unequivocal experimental result for these methods. Connectionist Speech Recognition: A Hybrid Approach is of use to anyone intending to use neural networks for speech recognition or within the framework provided by an existing successful statistical approach. This includes research and development groups working in the field of speech recognition, both with standard and neural network approaches, as well as other pattern recognition and/or neural network researchers. This book is also suitable as a text for advanced courses on neural networks or speech processing.
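The core trick in this hybrid framework is converting MLP posteriors into scaled emission likelihoods via Bayes' rule; a minimal sketch with made-up numbers:

```python
import numpy as np

def scaled_likelihoods(posteriors, priors):
    """Bayes' rule: p(frame | state) is proportional to
    p(state | frame) / p(state), so dividing MLP posteriors by the state
    priors yields scaled likelihoods usable as HMM emission scores."""
    return posteriors / priors

posteriors = np.array([0.7, 0.2, 0.1])  # MLP output for one acoustic frame
priors = np.array([0.5, 0.3, 0.2])      # state priors from training alignments
print(scaled_likelihoods(posteriors, priors))
```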

Journal ArticleDOI
TL;DR: In this article, a Siamese time delay neural network is used to measure the similarity between pairs of signatures, and the output of this half network is the feature vector for the input signature.
Abstract: This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted; all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory.

Journal ArticleDOI
TL;DR: Theoretical results concerning the capabilities and limitations of various neural network models are summarized, and some of their extensions are discussed.
Abstract: Theoretical results concerning the capabilities and limitations of various neural network models are summarized, and some of their extensions are discussed. The network models considered are divided into two basic categories: static networks and dynamic networks. Unlike static networks, dynamic networks have memory. They fall into three groups: networks with feedforward dynamics, networks with output feedback, and networks with state feedback, which are emphasized in this work. Most of the networks discussed are trained using supervised learning.

Proceedings ArticleDOI
01 Aug 1993
TL;DR: A method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units without time-consuming Monte Carlo simulations is described.
Abstract: Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the trade-off between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed efficiently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights.
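The "amount of information in a weight" can be read as the KL divergence between the weight's Gaussian posterior and its Gaussian prior; a sketch of that per-weight term (all values illustrative), which the scheme trades off against expected squared error:

```python
import numpy as np

def weight_information(mu_q, var_q, mu_p=0.0, var_p=1.0):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) in nats: the information
    carried by one noisy weight with Gaussian posterior q and prior p."""
    return (0.5 * np.log(var_p / var_q)
            + (var_q + (mu_q - mu_p) ** 2) / (2.0 * var_p) - 0.5)

# A sharply specified (low-noise) weight costs more than a noisy one:
print(weight_information(0.8, 0.01), weight_information(0.8, 0.5))
```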

Journal ArticleDOI
TL;DR: This paper has reviewed, with somewhat variable coverage, the nine MR image segmentation techniques itemized in Table II; each has its merits and drawbacks.
Abstract: This paper has reviewed, with somewhat variable coverage, the nine MR image segmentation techniques itemized in Table II. A wide array of approaches has been discussed; each has its merits and drawbacks. We have also given pointers to other approaches not discussed in depth in this review. The methods reviewed fall roughly into four model groups: c-means, maximum likelihood, neural networks, and k-nearest neighbor rules. Both supervised and unsupervised schemes require human intervention to obtain clinically useful results in MR segmentation. Unsupervised techniques require somewhat less interaction on a per patient/image basis. Maximum likelihood techniques have had some success, but are very susceptible to the choice of training region, which may need to be chosen slice by slice for even one patient. Generally, techniques that must assume an underlying statistical distribution of the data (such as LML and UML) do not appear promising, since tissue regions of interest do not usually obey the distributional tendencies of probability density functions. The most promising supervised techniques reviewed seem to be FF/NN methods that allow hidden layers to be configured as examples are presented to the system. An example of a self-configuring network, FF/CC, was also discussed. The relatively simple k-nearest neighbor rule algorithms (hard and fuzzy) have also shown promise in the supervised category. Unsupervised techniques based upon fuzzy c-means clustering algorithms have also shown great promise in MR image segmentation. Several unsupervised connectionist techniques have recently been experimented with on MR images of the brain and have provided promising initial results. A pixel-intensity-based edge detection algorithm has recently been used to provide promising segmentations of the brain. This is also an unsupervised technique, older versions of which have been susceptible to oversegmenting the image because of the lack of clear boundaries between tissue types or finding uninteresting boundaries between slightly different types of the same tissue. To conclude, we offer some remarks about improving MR segmentation techniques. The better unsupervised techniques are too slow. Improving speed via parallelization and optimization will improve their competitiveness with, e.g., the k-nn rule, which is the fastest technique covered in this review. Another area for development is dynamic cluster validity. Unsupervised methods need better ways to specify and adjust c, the number of tissue classes found by the algorithm. Initialization is a third important area of research. Many of the schemes listed in Table II are sensitive to good initialization, both in terms of the parameters of the design, as well as operator selection of training data. (ABSTRACT TRUNCATED AT 400 WORDS)
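For concreteness, a sketch of one fuzzy c-means iteration, the update at the heart of the unsupervised clustering techniques the review finds promising (m is the usual fuzzifier; the data here are stand-ins for pixel features):

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means iteration: update the membership matrix u, then
    recompute the class centers. X: (n, d) features; centers: (c, d)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    um = u ** m
    return u, (um.T @ X) / um.sum(axis=0)[:, None]

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 1))                      # stand-in for pixel intensities
u, centers = fcm_step(X, rng.normal(size=(3, 1)))  # c = 3 tissue classes
```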

Journal ArticleDOI
TL;DR: An optimized self-organizing map algorithm has been used to obtain protein topological (proteinotopic) maps, and analysis of the proteinotopic map reveals that the network extracts the main secondary structure features even with the small number of examples used.
Abstract: An optimized self-organizing map algorithm has been used to obtain protein topological (proteinotopic) maps. A neural network is able to arrange a set of proteins depending on their ultraviolet circular dichroism spectra in a completely unsupervised learning process. Analysis of the proteinotopic map reveals that the network extracts the main secondary structure features even with the small number of examples used. Some methods to use the proteinotopic map for protein secondary structure prediction are tested showing a good performance in the 200-240 nm wavelength range that is likely to increase as new protein structures are known.

Journal ArticleDOI
TL;DR: It is shown that the radial basis function network has an identical structure to the optimal Bayesian symbol-decision equalizer solution and, therefore, can be employed to implement the Bayesian equalizer.
Abstract: The application of a radial basis function network to digital communications channel equalization is examined. It is shown that the radial basis function network has an identical structure to the optimal Bayesian symbol-decision equalizer solution and, therefore, can be employed to implement the Bayesian equalizer. The training of a radial basis function network to realize the Bayesian equalization solution can be achieved efficiently using a simple and robust supervised clustering algorithm. During data transmission a decision-directed version of the clustering algorithm enables the radial basis function network to track a slowly time-varying environment. Moreover, the clustering scheme provides an automatic compensation for nonlinear channel and equipment distortion. Computer simulations are included to illustrate the analytical results.
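A sketch of the structural identity the paper exploits: the Bayesian symbol decision is a sum of Gaussian kernels centered on the noiseless channel states, i.e. exactly an RBF network. The 2-tap channel, BPSK symbols, and noise variance below are illustrative:

```python
import numpy as np
from itertools import product

h = np.array([1.0, 0.5])                     # illustrative 2-tap channel
states, labels = [], []
for bits in product([-1.0, 1.0], repeat=3):  # (s(k), s(k-1), s(k-2)), BPSK
    s = np.array(bits)
    states.append(np.array([h @ s[0:2], h @ s[1:3]]))  # noiseless (r(k), r(k-1))
    labels.append(s[0])                      # desired decision: current symbol s(k)
states, labels = np.array(states), np.array(labels)

def bayesian_equalizer(r, sigma2=0.1):
    """RBF network = Bayesian equalizer: one Gaussian kernel per channel
    state, widths set by the noise variance, output weights +/-1."""
    k = np.exp(-np.sum((states - r) ** 2, axis=1) / (2.0 * sigma2))
    return np.sign(labels @ k)

print(bayesian_equalizer(np.array([1.4, 0.6])))   # decides s(k) = +1
```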


Book
01 Jan 1993
TL;DR: This text and reference provides a systematic development of neural network learning algorithms from a computational perspective, coupled with an extensive exploration of neural network expert systems, which shows how the power of neural network learning can be harnessed to generate expert systems automatically.
Abstract: Neural Network Learning and Expert Systems is the first book to present a unified and in-depth development of neural network learning algorithms and neural network expert systems. Especially suitable for students and researchers in computer science, engineering, and psychology, this text and reference provides a systematic development of neural network learning algorithms from a computational perspective, coupled with an extensive exploration of neural network expert systems which shows how the power of neural network learning can be harnessed to generate expert systems automatically. Features include a comprehensive treatment of the standard learning algorithms (with many proofs), along with much original research on algorithms and expert systems. Additional chapters explore constructive algorithms, introduce computational learning theory, and focus on expert system applications to noisy and redundant problems. For students there is a large collection of exercises, as well as a series of programming projects that lead to an extensive neural network software package. All of the neural network models examined can be implemented using standard programming languages on a microcomputer.

Journal ArticleDOI
TL;DR: Experimental results show that RPCL outperforms FSCL when used for unsupervised classification, for training a radial basis function (RBF) network, and for curve detection in digital images.
Abstract: It is shown that frequency sensitive competitive learning (FSCL), one version of the recently improved competitive learning (CL) algorithms, significantly deteriorates in performance when the number of units is inappropriately selected. An algorithm called rival penalized competitive learning (RPCL) is proposed. In this algorithm, not only is the winner unit modified to adapt to the input for each input, but its rival (the 2nd winner) is delearned by a smaller learning rate. RPCL can be regarded as an unsupervised extension of Kohonen's supervised LVQ2. RPCL has the ability to automatically allocate an appropriate number of units for an input data set. The experimental results show that RPCL outperforms FSCL when used for unsupervised classification, for training a radial basis function (RBF) network, and for curve detection in digital images.
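A sketch of the rival penalized update (omitting the frequency-sensitive weighting of the full algorithm): the winner adapts toward the input while the second winner is delearned at a much smaller rate; both rates are illustrative:

```python
import numpy as np

def rpcl_step(w, x, a_win=0.05, a_rival=0.002):
    """Winner moves toward x; the rival (2nd winner) is delearned, i.e.
    pushed away, at a much smaller rate, so superfluous units get driven
    off and the effective number of units is selected automatically."""
    order = np.argsort(np.linalg.norm(w - x, axis=1))
    win, rival = order[0], order[1]
    w[win] += a_win * (x - w[win])
    w[rival] -= a_rival * (x - w[rival])
    return w

rng = np.random.default_rng(5)
w = rng.normal(size=(6, 2))              # deliberately too many units
for x in rng.normal(size=(500, 2)):
    w = rpcl_step(w, x)
```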

Journal ArticleDOI
H. S. Seung, Haim Sompolinsky
TL;DR: It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic: although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle.
Abstract: In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at an optimal width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models.
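A sketch of the population vector estimator analyzed in the paper: the estimated direction is the response-weighted vector sum of the neurons' preferred directions (the cosine tuning curve and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64
preferred = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # preferred directions
stimulus = 1.2                                                # true direction (rad)

# Rectified cosine tuning plus noise (tuning shape and noise are illustrative).
rates = np.maximum(0.0, np.cos(preferred - stimulus)) + 0.1 * rng.normal(size=n)

# Population vector: firing-rate-weighted sum of unit preferred-direction vectors.
vec = np.array([rates @ np.cos(preferred), rates @ np.sin(preferred)])
print(np.arctan2(vec[1], vec[0]))        # estimate, close to the true 1.2 rad
```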

Journal ArticleDOI
TL;DR: This article proposes and empirically evaluates a method for the final, and possibly most difficult, step of the refinement of existing knowledge and demonstrates that neural networks can be used to effectively refine symbolic knowledge.
Abstract: Neural networks, despite their empirically proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge must be inserted into a neural network. Second, the network must be refined. Third, the refined knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this article, we propose and empirically evaluate a method for the final, and possibly most difficult, step. Our method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules 1) closely reproduce the accuracy of the network from which they are extracted; 2) are superior to the rules produced by methods that directly refine symbolic rules; 3) are superior to those produced by previous techniques for extracting rules from trained neural networks; and 4) are “human comprehensible.” Thus, this method demonstrates that neural networks can be used to effectively refine symbolic knowledge. Moreover, the rule-extraction technique developed herein contributes to the understanding of how symbolic and connectionist approaches to artificial intelligence can be profitably integrated.

Book
01 Aug 1993
TL;DR: Multiple Sensor System Applications, Benefits, and Atmospheric Attenuation. Data Fusion Algorithms and Architectures. Bayesian Inference. Dempster-Shafer Algorithm. Artificial Neural Networks. Voting Fusion. Fuzzy Logic and Neural Networks. Passive Data Association Techniques for Unambiguous Location of Targets.
Abstract: Multiple Sensor System Applications, Benefits, and Atmospheric Attenuation. Data Fusion Algorithms and Architectures. Bayesian Inference. Dempster-Shafer Algorithm. Artificial Neural Networks. Voting Fusion. Fuzzy Logic and Neural Networks. Passive Data Association Techniques for Unambiguous Location of Targets. Appendices: Planck Radiation Law and Radiative Transfer. Voting Fusion With Nested Confidence Levels.

Book
01 Jan 1993
TL;DR: This self-study guide leads both students and professionals swiftly from introductory principles to practical application, and enables readers to apply neural networks to their problems, either with a commercial neural network package or with a self-made program.
Abstract: From the Publisher: This book gives chemists insight into the much discussed and often not fully understood concept of neural networks. The authors pinpoint the five most widely used neural networks and learning strategies, illustrating them with lucid examples. Numerous applications from diverse fields are used in the second part of the book to help the chemist gain a better understanding of neural networks. This self-study guide leads both students and professionals swiftly from introductory principles to practical application. It enables readers to apply neural networks to their problems, either with a commercial neural network package or with a self-made program.

Book ChapterDOI
01 Jan 1993

Journal ArticleDOI
TL;DR: A simple and effective method for finding good hinges is presented, and it is shown that use of sums of hinge functions gives a powerful and efficient alternative to neural networks, with computation times several orders of magnitude less than those required to fit neural networks with a comparable number of parameters.
Abstract: A hinge function y=h(x) consists of two hyperplanes continuously joined together at a hinge. In regression (prediction), classification (pattern recognition), and noiseless function approximation, use of sums of hinge functions gives a powerful and efficient alternative to neural networks, with computation times several orders of magnitude less than those required to fit neural networks with a comparable number of parameters. A simple and effective method for finding good hinges is presented.
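A sketch of a single hinge function: two affine functions joined continuously along the hinge where they are equal. Fitting (not shown) alternates refitting each hyperplane on the data falling on its side of the current hinge; the weights below are illustrative:

```python
import numpy as np

def hinge(x, w_plus, w_minus, use_max=True):
    """h(x) = max (or min) of two affine functions of x, continuously
    joined on the hinge {x : (w_plus - w_minus) . [1, x] = 0}.
    x: (n, d); w_plus, w_minus: (d + 1,) with the bias term first."""
    X1 = np.hstack([np.ones((len(x), 1)), x])
    a, b = X1 @ w_plus, X1 @ w_minus
    return np.maximum(a, b) if use_max else np.minimum(a, b)

x = np.random.default_rng(7).normal(size=(5, 2))
print(hinge(x, np.array([0.1, 1.0, -0.5]), np.array([-0.2, -1.0, 0.3])))
```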

Journal ArticleDOI
TL;DR: The problem of optimal sequential learning is investigated, viewed as a problem of estimating an underlying function sequentially rather than estimating a set of parameters of the neural network, and a suboptimal solution to the sequential estimation problem is realized by a growing Gaussian radial basis function (GaRBF) network.
Abstract: In this paper, we investigate the problem of optimal sequential learning, viewed as a problem of estimating an underlying function sequentially rather than estimating a set of parameters of the neural network. First, we arrive at a suboptimal solution to the sequential estimate that can be mapped by a growing Gaussian radial basis function (GaRBF) network. This network adds a hidden unit for each observation. The function space approach in which the estimates are represented as vectors in a function space is used in developing a growth criterion to limit its growth. A simplification of the criterion leads to two joint criteria on the distance between the present pattern and the existing unit centers in the input space and on the approximation error of the network for the given observation to be satisfied together. This network is similar to the resource allocating network (RAN) (Platt 1991a) and hence RAN can be interpreted from a function space approach to sequential learning. Second, we present an enhancement to the RAN. The RAN either allocates a new unit based on the novelty of an observation or adapts the network parameters by the LMS algorithm. The function space interpretation of the RAN lends itself to an enhancement of the RAN in which the extended Kalman filter (EKF) algorithm is used in place of the LMS algorithm. The performance of the RAN and the enhanced network are compared in the experimental tasks of function approximation and time-series prediction, demonstrating the superior performance of the enhanced network with fewer hidden units. The approach adopted here has led us toward the minimal network required for a sequential learning problem.
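A sketch of the joint growth criterion described above, with hypothetical thresholds: a new hidden unit is allocated only when the observation is both far from all existing centers and poorly predicted; otherwise the existing parameters are adapted (by LMS in the RAN, by EKF in the enhanced network):

```python
import numpy as np

def should_grow(x, y_pred, y_true, centers, dist_thresh=0.5, err_thresh=0.1):
    """Allocate a new Gaussian hidden unit only if the input is far from
    every existing unit center AND the prediction error is large; both
    joint criteria must hold, matching the growth criterion above."""
    far = np.min(np.linalg.norm(centers - x, axis=1)) > dist_thresh
    poor = abs(y_true - y_pred) > err_thresh
    return far and poor

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
print(should_grow(np.array([3.0, -2.0]), 0.2, 0.9, centers))  # True -> grow
```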

Journal ArticleDOI
TL;DR: In this article, the authors propose a neural network test for neglected nonlinearity, based on the approximating ability of neural network modeling techniques recently developed by cognitive scientists, and compare it with the Keenan test, the Tsay test, the White dynamic information matrix test, the McLeod-Li test, and the Ramsey RESET test.

Journal ArticleDOI
TL;DR: A back-propagation neural network methodology has been applied to a sample of bankrupt and non-bankrupt firms, and the results indicate that this technique more accurately predicts bankruptcy than the logit model.