
Showing papers in "Neural Networks in 1989"


Journal ArticleDOI
TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.

18,794 citations
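
The theorem is existential and does not construct the approximating network. A minimal numerical sketch of the idea, under assumptions of this illustration rather than the paper's proof (synthetic target, random hidden weights, least-squares fit of the output layer only):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)                              # target function to approximate

H = 50                                     # number of hidden units
W = rng.normal(size=(1, H))                # random input-to-hidden weights
b = rng.normal(size=H)
phi = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # logistic squashing layer

# Fit only the output layer by least squares (random-features shortcut).
beta, *_ = np.linalg.lstsq(phi, y, rcond=None)
err = np.max(np.abs(phi @ beta - y))
print(f"max |error| with {H} hidden units: {err:.4f}")

Increasing H drives the error down, which is the qualitative content of the approximation result.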



Journal ArticleDOI
TL;DR: It is proved that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions.

3,989 citations


Journal ArticleDOI
TL;DR: An optimality principle based upon preserving maximal information in the output units is proposed, together with an unsupervised learning algorithm, based upon a Hebbian learning rule, that achieves the desired optimality.

1,554 citations
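
For a single linear unit under Gaussian assumptions, preserving maximal output information amounts to maximizing output variance. A minimal sketch using Oja's normalized Hebbian rule as a stand-in for the paper's exact update (the covariance matrix and learning rate are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 1.0]])        # assumed input covariance
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)                # Hebbian term with decay

pc1 = np.linalg.eigh(C)[1][:, -1]             # leading eigenvector of C
print("learned w (up to sign):", w / np.linalg.norm(w))
print("first principal vector:", pc1)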


Journal ArticleDOI
TL;DR: The main result is a complete description of the landscape attached to E in terms of principal component analysis, showing that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns.

1,456 citations
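
A small numerical check of the result, with hyperparameters that are assumptions of this sketch: gradient descent on the reconstruction error E of a linear bottleneck network reaches the same error as the explicit projection onto the leading principal vectors.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4)) @ np.diag([3.0, 2.0, 0.5, 0.1])
p = 2                                       # bottleneck width

A = rng.normal(size=(4, p)) * 0.1           # encoder weights
B = rng.normal(size=(p, 4)) * 0.1           # decoder weights
for _ in range(3000):                       # plain gradient descent on E
    R = X @ A @ B - X                       # reconstruction residual
    gA = X.T @ R @ B.T / len(X)
    gB = (X @ A).T @ R / len(X)
    A -= 0.05 * gA
    B -= 0.05 * gB

C = np.cov(X, rowvar=False)
V = np.linalg.eigh(C)[1][:, -p:]            # top-p principal vectors
print("network error:", round(float(np.mean((X @ A @ B - X) ** 2)), 4))
print("PCA optimum  :", round(float(np.mean((X @ V @ V.T - X) ** 2)), 4))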


Journal ArticleDOI
TL;DR: The activation dynamics of nets are considered from a rigorous mathematical point of view and an extension of the Cohen-Grossberg convergence theorem is proved for certain nets with nonsymmetric weight matrices.

601 citations


Journal ArticleDOI
TL;DR: The results on speed-up by drugs that increase brain concentrations of dopamine and acetylcholine support a 1972 prediction that the gated dipole's habituative transmitter is a catecholamine and its long-term memory trace transmitter is acetylcholine.

383 citations


Journal ArticleDOI
TL;DR: This review outlines some fundamental neural network modules for associative memory, pattern recognition, and category learning; an adaptive filter formalism provides a unified notation.

339 citations


Journal ArticleDOI
TL;DR: It is shown that both arm kinematics and arm dynamics can be learned if a suitable representation for the map output is used; owing to the topology-conserving property of the map, spatially neighboring neurons can learn cooperatively.

317 citations
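
A hedged sketch of the mechanism: each map node stores an input prototype (hand position) and an output value (joint angles), and the winning node adapts together with its grid neighbors, so neighboring neurons learn cooperatively. A toy planar 2-link arm stands in for the paper's setup; all constants here are illustrative assumptions.

import numpy as np

def hand(theta):                       # forward kinematics, unit links
    return np.array([np.cos(theta[0]) + np.cos(theta[0] + theta[1]),
                     np.sin(theta[0]) + np.sin(theta[0] + theta[1])])

rng = np.random.default_rng(3)
n = 15                                 # 1-D chain of map neurons
grid = np.arange(n)
w_in = rng.uniform(0, 2, size=(n, 2))        # prototypes in hand space
w_out = rng.uniform(0, np.pi, size=(n, 2))   # stored joint angles

for t in range(4000):
    theta = rng.uniform(0, np.pi, size=2)    # babbling: random posture
    x = hand(theta)
    s = np.argmin(np.sum((w_in - x) ** 2, axis=1))   # winning neuron
    h = np.exp(-(grid - s) ** 2 / (2 * 2.0 ** 2))    # neighborhood weights
    w_in += 0.1 * h[:, None] * (x - w_in)
    w_out += 0.1 * h[:, None] * (theta - w_out)

x_goal = hand(np.array([1.0, 0.5]))
s = np.argmin(np.sum((w_in - x_goal) ** 2, axis=1))
print("recalled angles:", w_out[s],
      "reach error:", np.linalg.norm(hand(w_out[s]) - x_goal))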


Journal ArticleDOI
N. Baba
TL;DR: The random optimization method of Matyas and its modified algorithm are used to learn the weights and parameters of a neural network in order to find the global minimum of the network's error function.

265 citations
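
A hedged sketch of Matyas-style random optimization applied to network weights: perturb all weights with Gaussian noise, keep the step only if the error drops, and (as in common modifications of the method) try the reversed step when the forward one fails. The network, task, and step size are illustrative assumptions, not the paper's exact configuration.

import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # XOR-like toy target

def error(w):
    W1, b1, w2, b2 = w[:8].reshape(2, 4), w[8:12], w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ w2 + b2)))
    return np.mean((out - y) ** 2)

w = rng.normal(size=17) * 0.5
e = error(w)
for t in range(20000):
    step = rng.normal(size=17) * 0.1
    for cand in (w + step, w - step):          # forward, then reversed
        ec = error(cand)
        if ec < e:                             # accept only improvements
            w, e = cand, ec
            break
print("final error:", e)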


Journal ArticleDOI
TL;DR: It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights.
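
The defining property of a terminal attractor is finite-time convergence: dynamics such as dx/dt = -x**(1/3) violate the Lipschitz condition at x = 0, so the state reaches the attractor in finite time, whereas an ordinary attractor like dx/dt = -x is only approached asymptotically. The Euler integration below is an illustrative sketch, not the paper's network construction.

import numpy as np

dt, x_term, x_ord = 1e-3, 1.0, 1.0
t = 0.0
while x_term > 0.0:
    x_term = max(x_term - dt * np.cbrt(x_term), 0.0)   # terminal attractor
    x_ord -= dt * x_ord                                # ordinary attractor
    t += dt
print(f"terminal attractor reached 0 at t = {t:.2f}; "
      f"ordinary attractor still at {x_ord:.4f}")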

Journal ArticleDOI
TL;DR: It is proved that the sparsely encoded associative memory has a large basin of attraction around each memorized pattern if and only if an activity control mechanism is attached to it.
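
A hedged sketch of sparse associative recall with activity control: store sparse {0,1} patterns with a covariance (Hebbian) rule, and at each recall step keep only the k most activated units, pinning network activity to the sparseness of the stored patterns. Sizes and the top-k control are toy assumptions standing in for the paper's analysis.

import numpy as np

rng = np.random.default_rng(5)
n, k, m = 400, 20, 30                     # units, active bits, patterns
a = k / n
P = np.zeros((m, n))
for mu in range(m):
    P[mu, rng.choice(n, k, replace=False)] = 1.0

W = (P - a).T @ (P - a) / n               # covariance learning rule
np.fill_diagonal(W, 0.0)

def recall(x, steps=5):
    for _ in range(steps):
        h = W @ x
        x = np.zeros(n)
        x[np.argsort(h)[-k:]] = 1.0       # activity control: top-k stay on
    return x

probe = P[0].copy()
flip = rng.choice(np.flatnonzero(probe), 6, replace=False)
probe[flip] = 0.0                         # corrupt 6 of the 20 active bits
out = recall(probe)
print("overlap with stored pattern:", int(out @ P[0]), "of", k)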

Journal ArticleDOI
TL;DR: Neural networks are presented which simulate some behavioral effects of frontal lobe damage and incorporate neural design principles developed for other purposes by Grossberg and his co-workers, including adaptive resonance between two layers of sensory processing.

Journal ArticleDOI
TL;DR: The process of feature extraction by an S-cell is analyzed mathematically in this paper, and the role of the C-cells in deformation-invariant pattern recognition is discussed.

Journal ArticleDOI
TL;DR: A neural network model of temporal pattern memory in animal motor systems is proposed: the network receives an external oscillatory input with some desired waveform and, after sufficient learning, autonomously oscillates with the previously given waveform.

Journal ArticleDOI
TL;DR: In this article, the mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks and is compared with the back-propagation (BP) learning algorithm on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification.
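
The core substitution in MFT learning: the stochastic sampling of Boltzmann machine learning is replaced by deterministic settling of mean activations V_i = tanh(sum_j w_ij V_j / T), and weights move along the clamped/free correlation difference. The tiny architecture, single training pattern, and constants below are assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(6)
n, T = 6, 0.5                              # 3 visible + 3 hidden units
W = rng.normal(size=(n, n)) * 0.1
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def settle(V, clamp=None, iters=50):
    for _ in range(iters):
        V = np.tanh(W @ V / T)             # deterministic mean-field update
        if clamp is not None:              # hold visible units at the data
            V[: len(clamp)] = clamp
    return V

data = np.array([1.0, -1.0, 1.0])          # one training pattern (visibles)
for epoch in range(200):
    Vc = settle(np.zeros(n) + 0.01, clamp=data)   # clamped phase
    Vf = settle(np.zeros(n) + 0.01)               # free phase
    W += 0.05 * (np.outer(Vc, Vc) - np.outer(Vf, Vf))
    np.fill_diagonal(W, 0.0)

print("free-phase visibles:", np.round(settle(np.zeros(n) + 0.01)[:3], 2))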

Journal ArticleDOI
TL;DR: The sensitivity of the network is such that small variations in the input do not affect the output; this results in an improved recognition rate for characters with slight variations in structure, linearity, and orientation.

Journal ArticleDOI
TL;DR: A new measure for the performance of hidden units as well as output units is proposed, called conditional class entropy, which not only allows existing networks to be judged but is also the basis of a new training algorithm with which an optimum number of neurons with optimum connecting weights can be found.
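
A hedged sketch of computing such a measure: binarize a unit's activations, then take the conditional entropy of the class label given the unit's state; a useful unit drives this entropy below the unconditional class entropy. The synthetic data and threshold are assumptions of this illustration, not the paper's procedure.

import numpy as np

rng = np.random.default_rng(7)
labels = rng.integers(0, 2, size=1000)
act = labels * 0.8 + rng.normal(scale=0.4, size=1000)   # informative unit

def conditional_class_entropy(act, labels, thresh=0.5):
    state = (act > thresh).astype(int)
    H = 0.0
    for s in (0, 1):                       # H(C|S) = sum_s p(s) H(C|S=s)
        mask = state == s
        p_s = mask.mean()
        if p_s == 0:
            continue
        for c in np.unique(labels):
            p = (labels[mask] == c).mean()
            if p > 0:
                H -= p_s * p * np.log2(p)
    return H

print("H(class | unit):", round(conditional_class_entropy(act, labels), 3))
print("H(class):", 1.0)   # labels are uniform binary, hence 1 bit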

Journal ArticleDOI
TL;DR: The authors present a summary of several psychophysical results that illustrate the complex interrelationship between stimulus factors influencing Phi and the organization of the resulting motion percepts, and a neural network model whose mechanisms are capable of explaining these percepts.

Journal ArticleDOI
TL;DR: The description allows us to predict which neurons will become active by using a simple algorithm that does not require numerical integration of the network's differential equations, and can be efficiently emulated on a VLSI bit array processor for moderately sized applications.

Journal ArticleDOI
TL;DR: The results of the spreading activation process can be used to facilitate 2D object learning and recognition from silhouettes by generating representations from bottom-up fixation cues which are invariant to translation, orientation, and scale.

Journal ArticleDOI
TL;DR: The CORT-X filter, as discussed by the authors, uses nonlinear interactions between multiple spatial scales to resolve a design trade-off that exists among the properties of boundary localization, boundary completion, and noise suppression.

Journal ArticleDOI
TL;DR: The results indicate that analogous self-similar multiple-scale neural networks may be used to carry out data fusion of many other types of spatially organized data structures.

Journal ArticleDOI
TL;DR: The proposed dynamic heteroassociative memory employs the newly developed Ho-Kashyap associative memory recording algorithm, which optimally distributes the association process of each neural layer over the individual neurons' weighted-sum and activation-function faculties.
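
A hedged sketch of the classical Ho-Kashyap procedure that the recording algorithm builds on: jointly adapt a weight vector w and a positive margin vector b so that Yw = b, where the rows of Y are class-normalized training patterns. This is the textbook two-class procedure, not the paper's full heteroassociative architecture; data and step size are assumptions.

import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(loc=+1.0, size=(20, 2))     # class 1 patterns
B = rng.normal(loc=-1.0, size=(20, 2))     # class 2 patterns
Y = np.vstack([np.hstack([A, np.ones((20, 1))]),
               -np.hstack([B, np.ones((20, 1))])])   # negate class 2

b = np.ones(len(Y))                        # margin vector, kept positive
Yp = np.linalg.pinv(Y)
for _ in range(500):
    w = Yp @ b                             # least-squares solve for w
    e = Y @ w - b
    b += 0.5 * (e + np.abs(e))             # raise b only where Yw > b
print("misclassified:", int(np.sum(Y @ w <= 0)))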

Journal ArticleDOI
TL;DR: An electronic neural network is designed, built, and tested which replicates many features exhibited by the olfactory bulb, and the nonlinear dynamics of the network are motivated by experimental findings in EEG recordings, indicating that a massively parallel architecture can be used to best describe the bulb's dynamics.

Journal ArticleDOI
TL;DR: The analysis of the coding parameters shows that the proposed algorithm works faster than conventional approaches, provided that the patterns are extremely sparse, and the retrieval time is independent of the number of stored items.
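
A hedged sketch of why recall cost can be independent of the number of stored items: in a binary (Willshaw-style) associative matrix, recall touches only the rows of the k active input bits, so the work scales with the sparseness, not with how many pairs were stored. This is a generic construction used for illustration, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(8)
n, k, m = 1000, 8, 200                     # units, active bits, stored pairs
pairs = []
W = np.zeros((n, n), dtype=bool)
for _ in range(m):
    x = rng.choice(n, k, replace=False)
    y = rng.choice(n, k, replace=False)
    W[np.ix_(x, y)] = True                 # clipped Hebbian storage
    pairs.append((x, y))

def recall(x_idx):
    counts = W[x_idx].sum(axis=0)          # touches only k rows: O(k * n)
    return np.flatnonzero(counts == len(x_idx))   # hit by all active bits

x, y = pairs[0]
print("recalled:", sorted(recall(x)))
print("stored  :", sorted(y))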

Journal ArticleDOI
TL;DR: It is shown that clocked Boltzmann machines are not much more powerful than combinational circuits built from gates which compute Boolean threshold functions and their negations.

Journal ArticleDOI
TL;DR: A parallel implementation of the optimum (or maximum likelihood Gaussian) classifier that uses a cellular automaton to very rapidly find the output vector with minimum Euclidean distance from the input vector is presented.
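
With equal covariances and priors, the maximum likelihood Gaussian decision reduces to picking the class exemplar at minimum Euclidean distance from the input. The vectorized search below is a stand-in for the paper's cellular automaton implementation; exemplars and input are toy assumptions.

import numpy as np

means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # class exemplars
x = np.array([2.6, 0.4])                                  # input vector

d2 = np.sum((means - x) ** 2, axis=1)      # all distances "in parallel"
print("class:", int(np.argmin(d2)), "distances:", np.round(d2, 2))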

Journal ArticleDOI
TL;DR: In this article, two fundamentally different graph architectures underlying the neural networks are tested, one based on arcs and the other on nodes; the performance measurements underscore the necessity of making connections the basic unit of representation.

Journal ArticleDOI
TL;DR: A heuristic extension that makes the neural net processor more neuromorphic by introducing nonlinearity is discussed, and digital reconstructions with this extension are shown; these reflect a noticeable improvement in image quality.