
Showing papers on "Hybrid neural network published in 1992"


Journal ArticleDOI
TL;DR: In this article, a hybrid neural network-first principles modeling scheme is developed and used to model a fed-batch bioreactor; it combines a partial first principles model, which incorporates the available prior knowledge about the process being modeled, with a neural network that serves as an estimator of unmeasured process parameters that are difficult to model from first principles.
Abstract: A hybrid neural network-first principles modeling scheme is developed and used to model a fed-batch bioreactor. The hybrid model combines a partial first principles model, which incorporates the available prior knowledge about the process being modeled, with a neural network which serves as an estimator of unmeasured process parameters that are difficult to model from first principles. This hybrid model has better properties than standard “black-box” neural network models in that it is able to interpolate and extrapolate much more accurately, is easier to analyze and interpret, and requires significantly fewer training examples. Two alternative state and parameter estimation strategies, extended Kalman filtering and NLP optimization, are also considered. When no a priori known model of the unobserved process parameters is available, the hybrid network model gives better estimates of the parameters than these methods. By providing a model of these unmeasured parameters, the hybrid network can also make predictions and hence can be used for process optimization. These results apply when either full or partial state measurements are available, but in the latter case a state reconstruction method must be used for the first principles component of the hybrid model.
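The structure described above can be sketched in a few lines. The following is a toy illustration, not the paper's implementation: a simple fed-batch biomass balance supplies the first-principles part, while a tiny untrained network stands in for the unmeasured specific growth rate; all function names, the one-hidden-unit network, and the yield value are illustrative assumptions.

```python
import numpy as np

def nn_growth_rate(substrate, weights):
    """Tiny neural-net stand-in for the unmeasured specific growth rate
    mu(S); in the hybrid scheme this estimator would be trained from data."""
    w1, b1, w2, b2 = weights
    hidden = np.tanh(w1 * substrate + b1)    # one hidden unit, for illustration
    return float(np.exp(w2 * hidden + b2))   # exp keeps mu positive

def hybrid_step(biomass, substrate, feed, dt, weights):
    """One Euler step of the hybrid model: the mass balances are the
    first-principles part, the network supplies mu."""
    mu = nn_growth_rate(substrate, weights)
    d_biomass = mu * biomass                  # growth (first principles)
    d_substrate = -mu * biomass / 0.5 + feed  # consumption, assumed yield Y = 0.5
    return biomass + dt * d_biomass, substrate + dt * d_substrate

weights = (1.0, 0.0, 0.5, -2.0)  # illustrative, untrained
x, s = 1.0, 10.0
for _ in range(10):
    x, s = hybrid_step(x, s, feed=0.1, dt=0.1, weights=weights)
```

The point of the split is visible even in this sketch: the balance equations constrain how the states evolve, so the network only has to learn the scalar map from substrate to growth rate rather than the full process dynamics.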

753 citations


Book
01 Jun 1992
TL;DR: An edited collection on neural network theory and architectures, with chapters ranging from weightless neural tools and hierarchical matched filtering to variations on the training of recurrent networks.
Abstract: Contents include:
- Weightless neural tools - toward cognitive macrostructures, L. Aleksander
- An estimation theoretic basis for the design of sorting and classification network, R.W. Brockett
- A self organizing ARTMAP neural architecture for supervised learning and pattern recognition, G.A. Carpenter et al
- Hybrid neural network architectures - equilibrium systems that pay attention, L.N. Cooper
- Neural networks for internal representation of movements in primates and robots, R. Eckmiller et al
- Recognition and segmentation of characters in handwriting with selective attention, K. Fukushima et al
- Adaptive acquisition of language, A.L. Gorin et al
- What connectionist models learn - learning and representation in connectionist networks, S.J. Hanson and D.J. Burr
- Early vision, focal attention and neural nets, B. Julesz
- Toward hierarchical matched filtering, R. Hecht-Nielsen
- Some variations on training of recurrent networks, G.M. Kuhn and N.P. Herzberg
- Generalized perceptron networks with nonlinear discriminant functions, S.Y. Kung et al
- Neural tree networks, A. Sankar and R. Mammone
- Capabilities and training of feedforward nets, E.D. Sontag
- A fast learning algorithm for multilayer neural network based on projection methods, S.J. Yeh and H. Stark

89 citations


Proceedings ArticleDOI
23 Mar 1992
TL;DR: A novel keyword-spotting system that combines both neural network and dynamic programming techniques is presented, which makes use of the strengths of time delay neural networks (TDNNs), which include strong generalization ability, potential for parallel implementations, robustness to noise, and time shift invariant learning.
Abstract: A novel keyword-spotting system that combines both neural network and dynamic programming techniques is presented. This system makes use of the strengths of time delay neural networks (TDNNs), which include strong generalization ability, potential for parallel implementations, robustness to noise, and time shift invariant learning. Dynamic programming models are used by this system because they have the useful capability of time warping input speech patterns. This system was trained and tested on the Stonehenge Road Rally database, which is a 20-keyword-vocabulary, speaker-independent, continuous-speech corpus. Currently, this system performs at a figure of merit (FOM) rate of 82.5%. FOM is the detection rate averaged from 0 to 10 false alarms per keyword hour. This measure is explained in detail.
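The FOM metric quoted above can be approximated as follows. This is a hedged sketch of one simple way to compute it, assuming the spotter's putative detections are sorted by descending score and false alarms are normalized per keyword-hour; the function name and exact averaging convention are illustrative, not taken from the paper.

```python
def figure_of_merit(hit_flags, num_true, hours):
    """Approximate keyword-spotting FOM: the detection rate averaged as
    the false-alarm count runs up to 10 per keyword-hour.

    hit_flags: putative detections sorted by descending score
               (True = correct keyword hit, False = false alarm)
    num_true:  number of actual keyword occurrences in the test speech
    hours:     duration of the test speech, in hours
    """
    max_fas = int(10 * hours)  # allow 10 false alarms per keyword-hour
    rates, hits, fas = [], 0, 0
    for is_hit in hit_flags:
        if is_hit:
            hits += 1
        else:
            if fas >= max_fas:
                break
            fas += 1
            rates.append(hits / num_true)  # detection rate reached at this FA level
    if not rates:                          # no false alarms at all
        return hits / num_true
    return sum(rates) / len(rates)
```

For example, `figure_of_merit([True, True, False, True, False], num_true=3, hours=1.0)` averages the detection rates reached at the first and second false alarms.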

35 citations


Journal ArticleDOI
TL;DR: The edge preserving restoration of piecewise smooth images is formulated in terms of a probabilistic approach, and a MAP estimate algorithm is proposed which could be implemented on a hybrid neural network.

33 citations


Proceedings Article
01 Jan 1992
TL;DR: A hybrid multilayer perceptron (MLP)/hidden Markov model (HMM) speaker-independent continuous-speech recognition system, in which the advantages of both approaches are combined by using MLPs to estimate the state-dependent observation probabilities of an HMM.
Abstract: In this paper we present a hybrid multilayer perceptron (MLP)/hidden Markov model (HMM) speaker-independent continuous-speech recognition system, in which the advantages of both approaches are combined by using MLPs to estimate the state-dependent observation probabilities of an HMM. New MLP architectures and training procedures are presented which allow the modeling of multiple distributions for phonetic classes and context-dependent phonetic classes. Comparisons with a pure HMM system illustrate advantages of the hybrid approach both in terms of recognition accuracy and number of parameters required.
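The core trick in MLP/HMM hybrids of this kind is converting the network's posterior P(state | observation) into an emission score the HMM can use: by Bayes' rule, P(o|q)/P(o) = P(q|o)/P(q), so dividing the MLP posterior by the state prior gives a scaled likelihood. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def scaled_likelihoods(posteriors, priors):
    """Convert MLP state posteriors P(q|o) into scaled likelihoods
    P(o|q)/P(o) = P(q|o)/P(q), usable as HMM emission scores (the
    common factor P(o) is constant per frame and cancels in decoding)."""
    return posteriors / priors

# Illustrative values: 3 HMM states, one observation frame.
posteriors = np.array([0.7, 0.2, 0.1])  # MLP softmax output P(q|o)
priors     = np.array([0.5, 0.3, 0.2])  # state priors counted from training data
emissions  = scaled_likelihoods(posteriors, priors)
```

Note how the ranking can change: a state with a modest posterior but a small prior may score higher than a frequent state, which is exactly the correction the division performs.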

32 citations



Proceedings ArticleDOI
07 Jun 1992
TL;DR: An artificial neural network designed to recognize seismic patterns is presented, a hybrid model that is based on competitive learning from Kohonen, self-organization learning from Fukushima, and the delta rule.
Abstract: An artificial neural network designed to recognize seismic patterns is presented. It is a hybrid model because it consists of both unsupervised and supervised learning. The unsupervised layer plays the feature-extracting role, and the supervised layer is responsible for class decision. When learning is completed, the user presents a seismic pattern to this model to obtain a decision on which class the input pattern belongs to. If the model fails to recognize a pattern, meaning no node in the output layer produces a large enough response, the model will automatically decrease its vigilance threshold to become more tolerant. This automatic tolerance-adjustment mechanism is demonstrated on some examples, such as recognizing patterns under translation, scaling, noise, or deformation. The concepts are based on competitive learning from Kohonen, self-organization learning from Fukushima, and the delta rule.
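The automatic tolerance adjustment described above can be sketched as a loop that lowers the vigilance threshold until some output node responds strongly enough. This is a toy illustration under stated assumptions: cosine similarity as the node response, and an illustrative step size and floor, none of which come from the paper.

```python
import numpy as np

def classify_with_auto_tolerance(pattern, prototypes, vigilance=0.9,
                                 step=0.05, floor=0.5):
    """Return (index of the best-matching prototype, vigilance used),
    lowering the vigilance threshold until some node's response clears it;
    (None, floor-ish) means the pattern was rejected even at the floor."""
    # Cosine similarity of the input to each stored prototype.
    responses = prototypes @ pattern / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(pattern))
    while vigilance >= floor:
        if responses.max() >= vigilance:   # a node responds strongly enough
            return int(responses.argmax()), vigilance
        vigilance -= step                  # become more tolerant and retry
    return None, vigilance

prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
winner, used = classify_with_auto_tolerance(np.array([0.8, 0.3]), prototypes)
```

A noisy or deformed pattern that would fail at the initial threshold can still be assigned to its nearest class at a relaxed one, which is the tolerance behavior the abstract demonstrates.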

10 citations


Journal ArticleDOI
TL;DR: This work uses the nonlinear classification capabilities of neural networks for structure determination and a nonlinear identification algorithm to identify continuous-time linear models, expressed as differential equations or Laplace transforms.

9 citations


Proceedings ArticleDOI
18 Oct 1992
TL;DR: The authors explore many of the current issues involved in adaptive artificial neural network (ANN) controllers, covering basic neural network controller designs described in the literature, new approaches combining ANN techniques with linguistic-based approaches, and a framework for comparing adaptive artificial neural network controllers with other adaptive controllers using benchmark examples.
Abstract: The authors explore many of the current issues involved in adaptive artificial neural network (ANN) controllers. The major issues covered include: basic neural network controller designs described in the literature, new approaches combining ANN techniques with linguistic-based approaches, sources of input and output data for parameter estimation, and a framework for comparing adaptive artificial neural network controllers with other adaptive controllers using benchmark examples. In addition, hybrid neural network/fuzzy controllers are described.

9 citations



Proceedings Article
01 Jan 1992
TL;DR: This work uses a very detailed biologically motivated input representation of the speech tokens, Lyon's cochlear model as implemented by Slaney [20], to produce results comparable to those obtained by others without the addition of time normalization.
Abstract: We report results on vowel and stop consonant recognition with tokens extracted from the TIMIT database. Our current system differs from others doing similar tasks in that we do not use any specific time normalization techniques. We use a very detailed biologically motivated input representation of the speech tokens, Lyon's cochlear model as implemented by Slaney [20]. This detailed, high-dimensional representation, known as a cochleagram, is classified by either a back-propagation or by a hybrid supervised/unsupervised neural network classifier. The hybrid network is composed of a biologically motivated unsupervised network and a supervised back-propagation network. This approach produces results comparable to those obtained by others without the addition of time normalization.

Journal ArticleDOI
TL;DR: Hardware and algorithms that use optical processing at various stages for image processing and pattern recognition are described and some of the optical processing concepts used in the networks are presented.
Abstract: Hybrid neural network hardware and several algorithms that use optical processing at various stages for image processing and pattern recognition are described. The implementations of the algorithms in associative processors, optimization neural networks, symbolic correlator neural networks, production system neural networks, and adaptive neural networks are discussed. Some of the optical processing concepts used in the networks are presented.

Proceedings ArticleDOI
25 May 1992
TL;DR: The hybrid network architecture, its learning process and the improved learning algorithm are presented in this paper and the applied neural network models are the improved ART1 and the feedforward types.
Abstract: Data compression and generalization capability are important characteristics of a neural network model. From this point of view, the two-value image data compression and recovery of a hybrid neural network are examined experimentally. The applied neural network models are the improved ART1 and the feedforward types. The hybrid network architecture, its learning process and the improved learning algorithm are presented in this paper. All experiments were carried out using a large-scale general-purpose neural network simulating system, the GKD-N/sup 2/S/sup 2/, on the SUN3 workstation. Experimental results are given and discussed.
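The ART1-style side of such a compression scheme can be illustrated with a toy sketch: binary image blocks are matched against stored binary prototypes, and a block close enough to an existing prototype is encoded by that prototype's index alone. This is a minimal illustration of the matching idea, not the paper's improved algorithm; the match ratio, vigilance value, and function name are all assumptions.

```python
import numpy as np

def art1_like_compress(blocks, vigilance=0.8):
    """Toy ART1-style compression of binary image blocks: encode each
    block by the index of a sufficiently similar stored prototype
    (match ratio >= vigilance), otherwise store it as a new prototype."""
    prototypes, codes = [], []
    for block in blocks:
        best, best_match = None, 0.0
        for i, proto in enumerate(prototypes):
            inter = np.logical_and(block, proto).sum()
            match = inter / max(block.sum(), 1)  # ART1-style match ratio
            if match > best_match:
                best, best_match = i, match
        if best is not None and best_match >= vigilance:
            codes.append(best)                   # compressed: index only
        else:
            prototypes.append(block.copy())      # new category learned
            codes.append(len(prototypes) - 1)
    return prototypes, codes

blocks = [np.array([[1, 0], [0, 1]]),
          np.array([[1, 0], [0, 1]]),
          np.ones((2, 2), dtype=int)]
prototypes, codes = art1_like_compress(blocks)
```

Compression comes from repeated blocks sharing one prototype; a feedforward stage, as in the hybrid described above, could then refine the recovered image.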

Proceedings ArticleDOI
26 Oct 1992
TL;DR: A hybrid MTR system composed of ANN and KB classifiers and decision makers, and conventional signal processing and probabilistic target tracking algorithms, is developed.
Abstract: In this paper, we present a hybrid artificial neural network (ANN)/knowledge base (KB) system for multi-target recognition (MTR). Specifically, we develop a hybrid MTR architecture composed of ANN and KB classifiers and decision makers, and conventional signal processing and probabilistic target tracking algorithms. Our approach centers on the use of both the on-line classification and parallel processing of neural networks and the formal knowledge and reasoning of domain experts.