Author

Richard P. Lippmann

Bio: Richard P. Lippmann is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in topics: Artificial neural network & Intrusion detection system. The author has an h-index of 43 and has co-authored 92 publications receiving 21,619 citations.


Papers
Proceedings ArticleDOI
01 Oct 1985
TL;DR: Analysis of routing and preemption algorithms developed for circuit-switched networks such as the Defense Switched Network indicated that the new routing algorithms provided reduced point-to-point blocking probabilities after damage without adding extra trunking.
Abstract: New routing and preemption algorithms were developed for circuit-switched networks such as the Defense Switched Network that include both broadcast satellite and point-to-point transmission media. Three classes of routing procedures were evaluated: (1) mixed-media routing with fixed routing tables and call processing rules that included crankback and remote earth-station querying, (2) adaptive mixed-media routing which adapted routing tables after network damage, and (3) precedence flooding which routed high-precedence calls using flooding techniques. A new preemption technique called guided preemption was also evaluated. When guided preemption is used, lower-precedence calls to preempt are selected after examining the paths of all calls previously routed through a switch. Call paths are added to the call-setup-success common-channel-signalling (CCS) packet at the call destination and then read in and stored within each switch in the call path as this message travels back to the call source. Tools developed to evaluate algorithms included a steady-state network analysis program, a call-by-call simulator, and the EISN testbed network described in a companion paper by H.M. Heggestad. Results obtained with the call-by-call simulator and the steady-state analysis program indicated that the new routing algorithms provided reduced point-to-point blocking probabilities after damage without adding extra trunking. Best performance was obtained with the adaptive mixed-media routing and precedence flooding techniques. Guided preemption preempted fewer low-precedence calls than the blind preemption used in AUTOVON to complete the same number of high-precedence calls.
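The guided-preemption selection step can be illustrated with a small sketch. The Python below is a hypothetical simplification, not the paper's exact procedure: the Call record, its field names, and the rule of preempting the lowest-precedence candidate are assumptions. The idea it shows is that each switch stores the full path of every call routed through it (learned from the call-setup-success CCS message), so a high-precedence call needing a trunk preempts only lower-precedence calls that actually occupy that trunk.

```python
from dataclasses import dataclass

@dataclass
class Call:
    call_id: int
    precedence: int   # higher value = higher precedence
    path: list        # trunk names traversed, recorded from the CCS setup-success message

def guided_preempt(stored_calls, needed_trunk, new_precedence):
    """Pick one lower-precedence call whose recorded path uses the needed trunk.

    Blind preemption (as in AUTOVON) drops a lower-precedence call without
    knowing its full path; guided preemption consults the stored call paths so
    that only a call actually occupying the needed trunk is preempted.
    """
    candidates = [c for c in stored_calls
                  if needed_trunk in c.path and c.precedence < new_precedence]
    if not candidates:
        return None   # nothing eligible to preempt; the new call blocks
    return min(candidates, key=lambda c: c.precedence)   # drop the lowest-precedence candidate
```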

9 citations

Book ChapterDOI
01 Jan 1982
TL;DR: In most studies, intelligibility was measured only before and after training with an aid, and a control group of subjects trained only by a teacher was not included; in the studies that did include a control group, training procedures were often limited to those that could be used with the aid, were unspecified, or were not well designed.
Abstract: Publisher Summary This chapter discusses research and development on speech training aids for the deaf. The majority of speech training aids that have been developed have consisted of displays of acoustic or articulatory characteristics of speech. These displays have been used primarily in an attempt to increase the effectiveness of teacher therapy. Theoretically, an accurate, consistent display should be extremely useful because it would allow a clinician to verify subjective judgment by objective measurements and ensure the accuracy of feedback provided to a student. However, it is difficult to assess the effectiveness of this role of speech training aids on the basis of past research. In most studies, intelligibility was measured only before and after training with an aid, and a control group of subjects trained only by a teacher was not included. In the studies that included a control group, training procedures used with this group were often limited to those procedures that could be used with the aid, or procedures were unspecified, or not well designed. This problem has been aggravated by the paucity of research on the effectiveness of speech training techniques used by teachers.

9 citations

Dissertation
01 Jan 1995
TL;DR: This thesis addresses the problem of limited training data in pattern detection problems where a small number of target classes must be detected in a varied background; voice transformation techniques are used to generate more training examples that improve the robustness of the spotting system.
Abstract: This thesis addresses the problem of limited training data in pattern detection problems where a small number of target classes must be detected in a varied background. There is typically limited training data and limited knowledge about class distributions in this type of spotting problem, and in this case a statistical pattern classifier cannot accurately model class distributions. The domain of wordspotting is used to explore new approaches that improve spotting system performance with limited training data. First, a high-performance, state-of-the-art whole-word based wordspotter is developed. Two complementary approaches are then introduced to help compensate for the lack of data. Figure of Merit training, a new type of discriminative training algorithm, modifies the spotting system parameters according to the metric used to evaluate wordspotting systems. The effectiveness of discriminative training approaches may be limited due to overtraining a classifier on insufficient training data: while the classifier's performance on the training data improves, its performance on unseen test data degrades. To alleviate this problem, voice transformation techniques are used to generate more training examples that improve the robustness of the spotting system. The wordspotter is trained and tested on the Switchboard credit-card database, a database of spontaneous conversations recorded over the telephone. The baseline wordspotter achieves a Figure of Merit of 62.5% on a testing set. With Figure of Merit training, the Figure of Merit improves to 65.8%. When Figure of Merit training and voice transformations are used together, the Figure of Merit improves to 71.9%. The final wordspotter system achieves a Figure of Merit of 64.2% on the National Institute of Standards and Technology (NIST) September 1992 official benchmark, surpassing the 1992 results from other whole-word based wordspotting systems. Thesis Co-Supervisor: Richard P. Lippmann (Senior Technical Staff). Thesis Co-Supervisor: David H. Staelin (Professor of Electrical Engineering).
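The Figure of Merit used above is, roughly, an average keyword-detection rate as the allowed false-alarm rate is varied. The sketch below is a simplified, hypothetical version of that scoring, not the official NIST procedure, and the argument names are illustrative:

```python
def figure_of_merit(hits, num_true, hours):
    """Simplified Figure of Merit for a wordspotter: the average detection
    rate as the allowed false alarms per keyword per hour range from 1 to 10.

    hits     : list of (score, is_correct) putative keyword detections
    num_true : number of true keyword occurrences in the test data
    hours    : duration of the test data, in hours
    """
    ordered = sorted(hits, key=lambda h: h[0], reverse=True)   # best-scoring hits first
    rates = []
    for fa_per_hour in range(1, 11):
        allowed = fa_per_hour * hours
        detected = false_alarms = 0
        for _, is_correct in ordered:
            if is_correct:
                detected += 1
            elif false_alarms + 1 > allowed:
                break
            else:
                false_alarms += 1
        rates.append(detected / num_true)
    return sum(rates) / len(rates)
```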

9 citations

01 Jan 1998
TL;DR: A new low-complexity approach to intrusion detection called "bottleneck verification" was developed which can find novel attacks with low false alarm rates.
Abstract: A new low-complexity approach to intrusion detection called "bottleneck verification" was developed which can find novel attacks with low false alarm rates. Bottleneck verification is a general approach to intrusion detection designed specifically for systems where there are only a few legal "bottleneck" methods to transition to a higher privilege level and where it is relatively easy to determine when a user is at a higher level. The key concept is to detect (1) when legal bottleneck methods are used and (2) when a user is at a high privilege level. This approach detects an attack whenever a user performs operations at a high privilege level without using legal bottleneck methods to transition to that level. It can theoretically detect any novel attack which illegally transitions a user to a high privilege level, without prior knowledge of the attack.
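A minimal sketch of that rule, assuming a stream of audit records carrying a user, an action, and a privilege level; the record fields and the set of legal bottleneck actions below are hypothetical, not the system's actual interface:

```python
LEGAL_BOTTLENECKS = {"login", "su"}   # assumed legal methods for moving to a higher privilege level

def detect_illegal_privilege(audit_records):
    """Yield audit records where a user operates at high privilege without
    having used a legal bottleneck method to get there."""
    transitioned = set()                       # users seen using a legal bottleneck
    for rec in audit_records:                  # rec: dict with 'user', 'action', 'privilege'
        if rec["action"] in LEGAL_BOTTLENECKS:
            transitioned.add(rec["user"])
        elif rec["privilege"] == "high" and rec["user"] not in transitioned:
            yield rec                          # possible novel attack: no bottleneck used
```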

8 citations

Proceedings ArticleDOI
23 Feb 1992
TL;DR: Two auditory front ends which emulate some aspects of the human auditory system were compared using a high performance isolated word Hidden Markov Model (HMM) speech recognizer.
Abstract: Two auditory front ends which emulate some aspects of the human auditory system were compared using a high performance isolated word Hidden Markov Model (HMM) speech recognizer. In these initial studies, auditory models from Seneff [2] and Ghitza [4] were compared using both clean speech and speech corrupted by speech-like "babble" noise. Preliminary results indicate that the auditory models reduce the error rate slightly, especially at intermediate and high noise levels.

8 citations


Cited by
Journal ArticleDOI
Lawrence R. Rabiner
01 Feb 1989
TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
Abstract: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described.
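As one concrete example of the techniques the tutorial covers, the evaluation problem (the probability of an observation sequence given a model) is usually solved with the forward recursion. A minimal NumPy sketch, with illustrative variable names:

```python
import numpy as np

def forward_likelihood(A, B, pi, obs):
    """Forward algorithm: P(observation sequence | HMM).

    A   : (N, N) transition matrix, A[i, j] = P(state j at t+1 | state i at t)
    B   : (N, M) emission matrix,  B[j, k] = P(symbol k | state j)
    pi  : (N,)   initial state distribution
    obs : sequence of observation symbol indices
    """
    alpha = pi * B[:, obs[0]]          # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction: sum over predecessor states
    return alpha.sum()                 # termination
```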

21,819 citations

Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,056 citations

Book ChapterDOI
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Abstract: Publisher Summary This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses the gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. A lot of research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue. It closes with the discussion of performance and implementation issues.
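To make the gradient-descent theme concrete, here is a minimal sketch of one weight update for a single sigmoid unit trained on squared error; the learning rate and names are illustrative, not taken from the chapter:

```python
import numpy as np

def delta_rule_step(w, x, target, lr=0.1):
    """One gradient-descent step for a single sigmoid unit and squared error."""
    y = 1.0 / (1.0 + np.exp(-w @ x))           # unit output
    grad = (y - target) * y * (1.0 - y) * x    # dE/dw for E = 0.5 * (y - target)**2
    return w - lr * grad                       # move against the gradient
```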

13,033 citations

Journal ArticleDOI
TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
Abstract: In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
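In symbols, a standard statement of this kind of result (not quoted from the paper) concerns sums of the form

```latex
G(x) \;=\; \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\top} x + \theta_j\right),
\qquad x \in [0,1]^{n},
```

which are shown to be dense in the continuous functions on the unit hypercube, provided the fixed univariate function \sigma satisfies mild (e.g. sigmoidal) conditions.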

12,286 citations

Journal ArticleDOI
TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
Abstract: In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
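A minimal sketch of the bidirectional hidden layer described in the first part; the shapes and names below are illustrative assumptions, not the paper's notation. One recurrence runs in the positive time direction, an independent recurrence runs in the negative time direction, and the two hidden-state sequences are concatenated frame by frame for the output layer.

```python
import numpy as np

def brnn_hidden_states(x_seq, W_f, U_f, W_b, U_b):
    """x_seq: (T, D) input frames; W_*: (H, D) input weights; U_*: (H, H) recurrent weights.
    Returns the (T, 2H) concatenated forward/backward hidden states."""
    T, H = len(x_seq), W_f.shape[0]
    h_fwd, h_bwd = np.zeros((T, H)), np.zeros((T, H))
    h = np.zeros(H)
    for t in range(T):                        # positive time direction
        h = np.tanh(W_f @ x_seq[t] + U_f @ h)
        h_fwd[t] = h
    h = np.zeros(H)
    for t in reversed(range(T)):              # negative time direction
        h = np.tanh(W_b @ x_seq[t] + U_b @ h)
        h_bwd[t] = h
    return np.concatenate([h_fwd, h_bwd], axis=1)
```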

7,290 citations