Showing papers by "Richard P. Lippmann published in 1993"


01 Jan 1993
TL;DR: LNKnet is a software package that provides access to more than 20 pattern-classification, clustering, and feature-selection algorithms, including the most important algorithms from the fields of neural networks, statistics, machine learning, and artificial intelligence.
Abstract: Pattern-classification and clustering algorithms are key components of modern information processing systems used to perform tasks such as speech and image recognition, printed-character recognition, medical diagnosis, fault detection, process control, and financial decision making. To simplify the task of applying these types of algorithms in new application areas, we have developed LNKnet, a software package that provides access to more than 20 pattern-classification, clustering, and feature-selection algorithms. Included are the most important algorithms from the fields of neural networks, statistics, machine learning, and artificial intelligence. The algorithms can be trained and tested on separate data or tested with automatic cross-validation. LNKnet runs under the UNIX operating system, and access to the different algorithms is provided through a graphical point-and-click user interface. Graphical outputs include two-dimensional (2-D) scatter and decision-region plots and 1-D plots of data histograms, classifier outputs, and error rates during training. Parameters of trained classifiers are stored in files from which the parameters can be translated into source-code subroutines (written in the C programming language) that can then be embedded in a user application program. Lincoln Laboratory and other research laboratories have used LNKnet successfully for many diverse applications.
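
As a rough illustration of the workflow the abstract describes, the sketch below embeds a hypothetical generated classifier subroutine in a small C program. The function name, the linear form of the classifier, and the parameter values are assumptions chosen for illustration; they are not LNKnet's actual generated code or interface.

    /* Illustrative only: a hypothetical classifier subroutine of the kind a
     * package such as LNKnet might generate from stored parameters, embedded
     * in a user application.  Names and parameter values are made up. */
    #include <stdio.h>

    #define NUM_CLASSES 2
    #define NUM_INPUTS  2

    /* Hypothetical generated parameters for a simple linear classifier. */
    static const double weights[NUM_CLASSES][NUM_INPUTS] = {
        {  1.5, -0.7 },
        { -0.9,  1.2 }
    };
    static const double biases[NUM_CLASSES] = { 0.1, -0.2 };

    /* Return the index of the highest-scoring class for one input vector. */
    static int classify(const double inputs[NUM_INPUTS])
    {
        int best = 0;
        double best_score = 0.0;
        for (int c = 0; c < NUM_CLASSES; c++) {
            double score = biases[c];
            for (int i = 0; i < NUM_INPUTS; i++)
                score += weights[c][i] * inputs[i];
            if (c == 0 || score > best_score) {
                best = c;
                best_score = score;
            }
        }
        return best;
    }

    int main(void)
    {
        const double sample[NUM_INPUTS] = { 0.4, 1.1 };
        printf("predicted class: %d\n", classify(sample));
        return 0;
    }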

38 citations


Proceedings ArticleDOI
27 Apr 1993
TL;DR: Two approaches to integrating neural network and hidden Markov model (HMM) algorithms into one hybrid wordspotter are being explored.
Abstract: Two approaches to integrating neural network and hidden Markov model (HMM) algorithms into one hybrid wordspotter are being explored. One approach uses neural network secondary testing to analyze putative hits produced by a high-performance HMM wordspotter. This has provided consistent but small reductions in the number of false alarms required to obtain a given detection rate. In one set of experiments using the NIST Road Rally database, secondary testing reduced the false alarm rate by an average of 16.4%. A second approach uses radial basis function (RBF) neural networks to produce local machine scores for a Viterbi decoder. Network weights and RBF centers are trained at the word level to produce high scores for correct keyword hits and low scores for false alarms generated by nonkeyword speech. Preliminary experiments are exploring a constructive procedure which adds RBF centers to model nonkeyword near-misses, and a cost function which attempts to directly maximize the average detection accuracy over a specified range of false alarm rates.
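
The second approach can be pictured with a minimal Viterbi-decoding sketch in which a stub scoring function stands in for the RBF network. The left-to-right model topology, the transition values, and the rbf_score() stub are illustrative assumptions, not the system described in the paper.

    /* A minimal sketch of Viterbi decoding driven by per-frame local scores,
     * as in an RBF/HMM hybrid.  The rbf_score() stub stands in for the radial
     * basis function network; model sizes and values are illustrative. */
    #include <stdio.h>
    #include <math.h>

    #define NUM_STATES 3
    #define NUM_FRAMES 5

    /* Stub for the network's local score of frame t in state s (log domain). */
    static double rbf_score(int state, int frame)
    {
        return -fabs((double)state - 0.5 * (double)frame);  /* placeholder */
    }

    int main(void)
    {
        /* Log transition scores: stay in a state or advance to the next one. */
        const double stay = log(0.6), advance = log(0.4);
        double prev[NUM_STATES], best[NUM_STATES];

        /* Initialization: decoding must start in the first state. */
        for (int s = 0; s < NUM_STATES; s++)
            prev[s] = (s == 0) ? rbf_score(0, 0) : -INFINITY;

        /* Recursion over the remaining frames. */
        for (int t = 1; t < NUM_FRAMES; t++) {
            for (int s = 0; s < NUM_STATES; s++) {
                double from_stay = prev[s] + stay;
                double from_prev = (s > 0) ? prev[s - 1] + advance : -INFINITY;
                double path = (from_stay > from_prev) ? from_stay : from_prev;
                best[s] = path + rbf_score(s, t);
            }
            for (int s = 0; s < NUM_STATES; s++)
                prev[s] = best[s];
        }

        /* The score of the final state acts as the keyword's putative-hit score. */
        printf("keyword score: %f\n", prev[NUM_STATES - 1]);
        return 0;
    }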

25 citations


Proceedings Article
29 Nov 1993
TL;DR: A new approach to training spotters is presented which computes the FOM gradient for each input pattern and then directly maximizes the FOM using backpropagation, eliminating the need for thresholds during training.
Abstract: Spotting tasks require detection of target patterns from a background of richly varied non-target inputs. The performance measure of interest for these tasks, called the figure of merit (FOM), is the detection rate for target patterns when the false alarm rate is in an acceptable range. A new approach to training spotters is presented which computes the FOM gradient for each input pattern and then directly maximizes the FOM using backpropagation. This eliminates the need for thresholds during training. It also uses network resources to model Bayesian a posteriori probability functions accurately only for patterns which have a significant effect on detection accuracy over the false-alarm rates of interest. FOM training increased detection accuracy by 5 percentage points for a hybrid radial basis function (RBF) / hidden Markov model (HMM) wordspotter on the credit-card speech corpus.
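
As a rough sketch of how a figure of merit of this kind can be evaluated from putative-hit scores, the example below averages the detection rate over the strongest few false alarms. The thresholding scheme, the figure_of_merit() helper, and the score values are assumptions made for illustration; the paper's exact FOM definition and its gradient computation are not reproduced here.

    /* Simplified sketch of a figure of merit (FOM): detection rate for target
     * patterns averaged over a range of tolerated false alarms.  The exact
     * bookkeeping here is an assumption, not the paper's formulation. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Sort scores in descending order for qsort(). */
    static int cmp_desc(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x < y) - (x > y);
    }

    /* Average detection rate when 1..max_fa false alarms are tolerated.
     * Sorts the non-target scores in place. */
    static double figure_of_merit(const double *target, int nt,
                                  double *nontarget, int nn, int max_fa)
    {
        qsort(nontarget, nn, sizeof(double), cmp_desc);
        double sum = 0.0;
        for (int k = 1; k <= max_fa && k <= nn; k++) {
            double threshold = nontarget[k - 1];  /* k-th strongest false alarm */
            int detected = 0;
            for (int i = 0; i < nt; i++)
                if (target[i] > threshold)
                    detected++;
            sum += (double)detected / nt;
        }
        return sum / max_fa;
    }

    int main(void)
    {
        double target[]    = { 0.9, 0.8, 0.4, 0.7 };
        double nontarget[] = { 0.85, 0.5, 0.3, 0.2, 0.1 };
        printf("FOM = %.3f\n", figure_of_merit(target, 4, nontarget, 5, 3));
        return 0;
    }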

15 citations