Author

Richard P. Lippmann

Bio: Richard P. Lippmann is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research on topics including artificial neural networks and intrusion detection systems. The author has an h-index of 43 and has co-authored 92 publications receiving 21,619 citations.


Papers
Proceedings ArticleDOI
17 Jun 1990
TL;DR: On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) which dramatically reduced classification error rates.
Abstract: Genetic algorithms were used for feature selection and creation in two pattern-classification problems. On a machine-vision inspection task, it was found that genetic algorithms performed no better than conventional approaches to feature selection but required much more computation. On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) which dramatically reduced classification error rates. Neural network and nearest-neighbor classifiers were unable to provide such low error rates using only the original features.
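The abstract describes the feature-creation step only at a high level. As a rough illustration (not the authors' implementation), the Python sketch below evolves exponent pairs for a candidate polynomial feature x1^e1 * x2^e2 and scores each candidate by the leave-one-out nearest-neighbour error of a classifier given the augmented feature set; all names, operators, and parameters are illustrative assumptions.

```python
# Illustrative sketch of GA-based feature creation; not the paper's implementation.
# A genome is a pair of exponents (e1, e2) defining a new feature x1**e1 * x2**e2;
# fitness is the (negated) leave-one-out 1-NN error with that feature appended.
import numpy as np

rng = np.random.default_rng(0)

def nn_error(features, labels):
    """Leave-one-out 1-nearest-neighbour error rate."""
    d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.mean(labels[d.argmin(axis=1)] != labels)

def fitness(genome, X, y):
    e1, e2 = genome
    new_feature = (X[:, 0] ** e1) * (X[:, 1] ** e2)
    return -nn_error(np.column_stack([X, new_feature]), y)

def evolve(X, y, pop_size=20, generations=30):
    pop = rng.integers(0, 4, size=(pop_size, 2))              # exponents in 0..3
    for _ in range(generations):
        scores = np.array([fitness(g, X, y) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]    # keep the best half
        children = parents.copy()
        mutate = rng.random(children.shape) < 0.2             # random point mutation
        children[mutate] = rng.integers(0, 4, size=mutate.sum())
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(g, X, y) for g in pop])]   # best exponent pair
```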

34 citations

28 Mar 2007
TL;DR: COMET is a system that automatically assembles a test suite for a C program to improve line coverage; initial results are given for a prototype implementation, in which dynamic taint tracing dramatically narrows the search over inputs necessary to expose new code.
Abstract: We present COMET, a system that automatically assembles a test suite for a C program to improve line coverage, and give initial results for a prototype implementation. COMET works dynamically, running the program under a variety of instrumentations in a feedback loop that adds new inputs to an initial corpus with each iteration. One instrumentation in particular is crucial to the success of this approach: dynamic taint tracing. Inputs are labeled as tainted at the byte level and all read/write pairs in the program are augmented to track the flow of taint between memory objects. This allows COMET to determine from which bytes of which inputs the variables in conditions derive, thereby dramatically narrowing the search over inputs necessary to expose new code. On a test set of 13 example programs, COMET improves upon the level of coverage reached in random testing by an average of 23% relative, takes only about twice the time, and requires a tiny fraction of the number of inputs to do so.
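COMET itself instruments compiled C programs; the toy Python loop below only sketches the feedback idea described in the abstract. Here `run_instrumented` is a hypothetical hook (not part of COMET) that runs one input and returns the set of covered line numbers plus, for each still-uncovered branch, the input byte indices its condition depends on according to the taint trace.

```python
# Toy sketch of a taint-guided coverage feedback loop; `run_instrumented` is a
# hypothetical callback returning (covered_lines,
# {uncovered_branch_line: [tainted input byte indices]}) for one input.
def grow_test_suite(seed_inputs, run_instrumented, rounds=10):
    corpus = list(seed_inputs)           # initial corpus of byte strings
    covered = set()
    for _ in range(rounds):
        new_inputs = []
        for data in corpus:
            lines, taint = run_instrumented(data)
            covered |= lines
            for branch, byte_idxs in taint.items():
                if branch in covered:
                    continue
                # Mutate only the bytes the branch condition is tainted by,
                # instead of searching over the whole input.
                for i in byte_idxs:
                    for b in range(256):
                        candidate = data[:i] + bytes([b]) + data[i + 1:]
                        new_lines, _ = run_instrumented(candidate)
                        if new_lines - covered:              # exposed new code
                            covered |= new_lines
                            new_inputs.append(candidate)
                            break
        if not new_inputs:
            break
        corpus.extend(new_inputs)
    return corpus, covered
```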

31 citations

Proceedings ArticleDOI
11 Apr 1988
TL;DR: There has been a resurgence of interest in neural net models composed of many simple interconnected processing elements operating in parallel, and a major emphasis is placed on relating these models to existing classification and clustering algorithms.
Abstract: There has been a resurgence of interest in neural net models composed of many simple interconnected processing elements operating in parallel. The computational power of different neural net models and the effectiveness of simple error-correction training procedures have been demonstrated. Three important feed-forward models are described. Single- and multi-layer perceptrons, which can be used for pattern classification, are described, as well as Kohonen's feature map algorithm, which can be used for clustering or as a vector quantizer. A major emphasis is placed on relating these models to existing classification and clustering algorithms.
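As a concrete instance of the simple error-correction training procedures the overview mentions, here is a minimal single-layer perceptron sketch; the AND example, learning rate, and epoch count are illustrative and not from the paper.

```python
# Minimal single-layer perceptron with the classic error-correction rule.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """X: (n, d) inputs, y: labels in {-1, +1}. Returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified: nudge toward the target
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Example: learn the logical AND of two binary inputs (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))                   # -> [-1. -1. -1.  1.]
```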

30 citations

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The Viterbi net, as mentioned in this paper, is a neural network implementation of the Viterbi decoder used very effectively in recognition systems based on hidden Markov models (HMMs).
Abstract: Artificial neural networks are of interest because algorithms used in many speech recognizers can be implemented using highly parallel neural net architectures and because new parallel algorithms are being developed that are inspired by biological nervous systems. Some neural net approaches are presented for the problems of static pattern classification and time alignment. For static pattern classification, multi-layer perceptron classifiers trained with back propagation can form arbitrary decision regions, are robust, and train rapidly for convex decision regions. For time alignment, the Viterbi net is a neural net implementation of the Viterbi decoder used very effectively in recognition systems based on hidden Markov models (HMMs).
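The Viterbi net realizes the standard Viterbi recursion with neural-net components; the sketch below gives that recursion in plain NumPy (log-space, illustrative parameters) as a reference point for what the network computes, not the network formulation from the paper.

```python
# Standard Viterbi decoding recursion in log-space.
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """log_pi: (S,) initial log-probs, log_A: (S, S) transition log-probs,
    log_B: (S, V) emission log-probs, obs: list of symbol indices.
    Returns the most likely hidden state path."""
    S, T = len(log_pi), len(obs)
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # rows: from-state, cols: to-state
        back[t] = scores.argmax(axis=0)             # best predecessor per state
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                   # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```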

29 citations

Proceedings Article
01 Jan 1988
TL;DR: A network trained at a relatively high signal-to-noise (S/N) ratio and then used as a front end for a linear matched filter detector greatly reduced the probability of error.
Abstract: A nonlinearity is required before matched filtering in minimum-error receivers when the additive noise is impulsive and highly non-Gaussian. Experiments were performed to determine whether the correct clipping nonlinearity could be provided by a single-input single-output multi-layer perceptron trained with back propagation. It was found that a multi-layer perceptron with one input and output node, 20 nodes in the first hidden layer, and 5 nodes in the second hidden layer could be trained to provide a clipping nonlinearity with fewer than 5,000 presentations of noiseless and corrupted waveform samples. A network trained at a relatively high signal-to-noise (S/N) ratio and then used as a front end for a linear matched filter detector greatly reduced the probability of error. The clipping nonlinearity formed by this network was similar to that used in current receivers designed for impulsive noise and provided similar substantial improvements in performance.
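The 1-20-5-1 layer sizes below follow the abstract; everything else (tanh units, the soft-clipper training target, the minibatch schedule) is an assumption, so this is only a rough sketch of training such a network to reproduce a clipping nonlinearity, not the paper's experiment.

```python
# Rough sketch: a 1-20-5-1 multi-layer perceptron trained with plain backprop to
# approximate a clipping nonlinearity. Layer sizes are from the abstract; the tanh
# units, target clipper, and training schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
sizes = [1, 20, 5, 1]
W = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(np.tanh(acts[-1] @ Wi + bi))
    return acts

def train_step(x, target, lr=0.05):
    acts = forward(x)
    delta = (acts[-1] - target) * (1 - acts[-1] ** 2)          # output-layer error
    for i in range(len(W) - 1, -1, -1):
        grad_W = acts[i].T @ delta / len(x)
        grad_b = delta.mean(axis=0)
        if i:                                                  # backpropagate first,
            delta = (delta @ W[i].T) * (1 - acts[i] ** 2)      # then update weights
        W[i] -= lr * grad_W
        b[i] -= lr * grad_b

# Target: a hard clipper scaled into tanh's output range.
x = rng.uniform(-3, 3, size=(5000, 1))
y = np.clip(x, -1, 1) * 0.9
for _ in range(2000):
    idx = rng.integers(0, len(x), 64)
    train_step(x[idx], y[idx])
```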

27 citations


Cited by
Journal ArticleDOI
Lawrence R. Rabiner
01 Feb 1989
TL;DR: In this paper, the author provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966), gives practical details on methods of implementing the theory, and describes selected applications of HMMs to distinct problems in speech recognition.
Abstract: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described.
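One of the tutorial's three fundamental problems is scoring an observation sequence against a model; a minimal forward-algorithm sketch, with toy coin-tossing parameters in the spirit of the tutorial's example, is given below (plain probabilities, so it is only suitable for short sequences).

```python
# Forward algorithm for P(observations | HMM); toy parameters, illustrative only.
import numpy as np

def forward_probability(pi, A, B, obs):
    """pi: (S,) initial probs, A: (S, S) transitions, B: (S, V) emissions,
    obs: list of symbol indices. Returns the sequence likelihood."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Two hidden "coins" (fair vs. biased); symbols 0 = heads, 1 = tails.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.5, 0.5], [0.8, 0.2]])
print(forward_probability(pi, A, B, [0, 0, 1]))
```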

21,819 citations

Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.
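Among the models the book treats is the radial basis function network; a minimal sketch of one, with Gaussian basis functions and linear output weights fit by least squares (the centers, width, and sine-fitting example are illustrative), is:

```python
# Minimal radial basis function network: fixed Gaussian centers, linear output layer.
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf(X, y, centers, width=1.0):
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                 # least-squares weights
    return lambda Xq: rbf_design(Xq, centers, width) @ w

# Example: fit a 1-D sine curve with 10 evenly spaced centers.
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
model = fit_rbf(X, y, centers=np.linspace(0, 2 * np.pi, 10)[:, None], width=0.7)
```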

19,056 citations

Book ChapterDOI
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Abstract: This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses the gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. A lot of research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue. It closes with the discussion of performance and implementation issues.
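Gradient descent, one of the chapter's two underlying mathematical themes, amounts in its simplest form to repeated steps down an error gradient; the sketch below does this for a mean-squared-error surface (the linear model, data, and step size are illustrative, not from the chapter).

```python
# Plain gradient descent on a mean-squared-error surface; illustrative only.
import numpy as np

def gradient_descent(X, y, lr=0.01, steps=5000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Fit y = 1 + 2x; the result approaches the least-squares solution [1., 2.].
X = np.column_stack([np.ones(4), np.arange(4.0)])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(gradient_descent(X, y))
```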

13,033 citations

Journal ArticleDOI
TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
Abstract: In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
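Written out, the approximating form in the abstract is a finite sum of sigmoidal ridge functions; in notation reconstructed from the abstract (symbols may differ from the paper's):

```latex
% Single-hidden-layer approximant with a continuous sigmoidal nonlinearity \sigma:
G(x) \;=\; \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(w_j^{\top} x + \theta_j\right),
\qquad x \in [0,1]^n .
% The result: for any continuous f on [0,1]^n and any \varepsilon > 0 there exist
% N, \alpha_j, w_j, \theta_j with |G(x) - f(x)| < \varepsilon for all x in [0,1]^n.
```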

12,286 citations

Journal ArticleDOI
TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
Abstract: In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
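As a structural sketch only (not the paper's training setup), the bidirectional idea amounts to two recurrent passes over the same sequence, one forward and one backward in time, whose per-frame hidden states are concatenated before the output layer; the weights, sizes, and tanh units below are illustrative.

```python
# Structural sketch of a bidirectional RNN feature extractor; illustrative weights.
import numpy as np

def rnn_pass(X, W_in, W_rec, reverse=False):
    steps = reversed(range(len(X))) if reverse else range(len(X))
    h = np.zeros(W_rec.shape[0])
    out = np.zeros((len(X), len(h)))
    for t in steps:
        h = np.tanh(X[t] @ W_in + h @ W_rec)   # simple recurrent update
        out[t] = h
    return out

def brnn_features(X, W_in_f, W_rec_f, W_in_b, W_rec_b):
    fwd = rnn_pass(X, W_in_f, W_rec_f)                  # sees past context
    bwd = rnn_pass(X, W_in_b, W_rec_b, reverse=True)    # sees future context
    return np.concatenate([fwd, bwd], axis=1)           # per-frame combined state

# Example with random weights: 6 frames of 3-dimensional input, 4 hidden units each way.
rng = np.random.default_rng(0)
T, d, h = 6, 3, 4
X = rng.normal(size=(T, d))
feats = brnn_features(X, rng.normal(size=(d, h)), 0.1 * rng.normal(size=(h, h)),
                      rng.normal(size=(d, h)), 0.1 * rng.normal(size=(h, h)))
```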

7,290 citations