
Showing papers in "Neural Networks in 1992"


Journal ArticleDOI
TL;DR: The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate.

5,834 citations
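
As a concrete flavor of the stacking idea, here is a minimal sketch in Python; the ridge models, feature subsets, and 5-fold split are illustrative assumptions, not the paper's experiments.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def fit_ridge(A, b, lam=1e-2):
    # Closed-form ridge regression weights.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Two hypothetical level-0 models: ridge on the first 2 features, ridge on all 5.
feature_sets = [slice(0, 2), slice(0, 5)]

K = 5
folds = np.array_split(np.arange(len(X)), K)
meta_X = np.zeros((len(X), len(feature_sets)))   # out-of-fold predictions

for k in range(K):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(K) if j != k])
    for m, fs in enumerate(feature_sets):
        w = fit_ridge(X[train, fs], y[train])
        meta_X[test, m] = X[test, fs] @ w

# The level-1 combiner is trained on the level-0 out-of-fold predictions,
# which is what guards against simply memorizing the training set.
w_meta = fit_ridge(meta_X, y)
print("combiner weights:", w_meta)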


Journal ArticleDOI
TL;DR: The Stochastic Gradient Ascent (SGA) neural network is proposed and shown to be closely related to the Generalized Hebbian Algorithm (GHA); the SGA behaves better for extracting the less dominant eigenvectors.

857 citations
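
For orientation, a sketch of the closely related GHA (the SGA variant reweights the decorrelation terms to stabilize the minor eigenvectors; the learning rate and data here are assumptions):

import numpy as np

rng = np.random.default_rng(1)
# Data with a known covariance structure.
C = np.diag([5.0, 2.0, 0.5])
X = rng.normal(size=(5000, 3)) @ np.linalg.cholesky(C).T

k, lr = 2, 1e-3
W = rng.normal(scale=0.1, size=(k, 3))  # rows converge to eigenvectors

for x in X:
    y = W @ x
    # Hebbian term minus lower-triangular decorrelation (Sanger's GHA).
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

print(W)  # rows approximate the top-2 eigenvectors of C, up to sign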


Journal ArticleDOI
TL;DR: From a direct proof of the universal approximation capabilities of perceptron-type networks with two hidden layers, estimates of the number of hidden units are derived, based on properties of the function being approximated and the accuracy of its approximation.

685 citations


Journal ArticleDOI
Mohamad Musavi, W. Ahmed, K. H. Chan, K. B. Faris, D. M. Hummels
TL;DR: An approach for the implementation of the Radial Basis Function (RBF) technique is presented and applied to a network of the appropriate architecture, and solutions are proposed for making RBF a more efficient method for interpolation and classification.

600 citations
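
A minimal Gaussian RBF interpolation sketch for reference; placing a center on every training point with a single fixed width is the naive baseline, and the width value here is an arbitrary assumption:

import numpy as np

Xtr = np.linspace(0, 2 * np.pi, 20)[:, None]
ytr = np.sin(Xtr).ravel()

def design(A, centers, width=0.5):
    # Gaussian activations phi_ij = exp(-||a_i - c_j||^2 / (2 w^2)).
    d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = design(Xtr, Xtr)
w = np.linalg.solve(Phi + 1e-8 * np.eye(len(Xtr)), ytr)  # exact interpolation

Xte = np.linspace(0, 2 * np.pi, 7)[:, None]
print(design(Xte, Xtr) @ w)    # close to sin at the test points
print(np.sin(Xte).ravel())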


Journal ArticleDOI
TL;DR: The results show that the neural network approach to multivariate time-series analysis is a leading contender with the statistical modeling approaches.

489 citations
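
A sketch of the window-based neural forecaster idea (one hidden layer trained by plain gradient descent; the series, window length, and training details are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(400, dtype=float)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

p = 8  # the past p values predict the next value
X = np.stack([series[i:i + p] for i in range(len(series) - p)])
y = series[p:]

h, lr = 16, 0.01
W1 = rng.normal(scale=0.3, size=(p, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.3, size=h);      b2 = 0.0

for _ in range(300):                     # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    gW2 = H.T @ err / len(y); gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)
    gW1 = X.T @ dH / len(y); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("final training MSE:", (err ** 2).mean())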


Journal ArticleDOI
TL;DR: In this paper, the authors consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs and show that it is NP-complete to decide whether there exist weights and thresholds for this network so that it produces output consistent with a given set of training examples.

479 citations


Journal ArticleDOI
TL;DR: It is suggested that cortical chaos may serve for dynamically linking memories, as well as for memory search, in nonequilibrium neural networks.

398 citations


Journal ArticleDOI
TL;DR: The modified back-propagation method consists of a simple change in the total error-of-performance function that is to be minimized by the algorithm, and the final approach to the desired response function is accelerated by an amount that can be predicted analytically.

398 citations
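
The paper's specific modification isn't reproduced here, but the general mechanism can be seen with one well-known change of criterion: replacing squared error by cross-entropy at a sigmoid output removes the o(1-o) factor that slows the final approach to the targets.

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

o, t = sigmoid(np.array([4.0, -4.0])), np.array([1.0, 0.0])

delta_sq = (o - t) * o * (1 - o)   # squared error: tiny near saturation
delta_ce = (o - t)                 # cross-entropy: stays proportional

print(delta_sq, delta_ce)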


Journal ArticleDOI
TL;DR: It is shown that a net can be trained so that the map and its derivatives are learned, and least squares and similar estimates are strongly consistent in Sobolev norm provided the number of hidden units and the size of the training set increase together.

309 citations
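
A sketch of fitting a map together with its derivative under a joint criterion, in the spirit of the result; for brevity only the output weights are trained (a simplification), and the target function is an assumption:

import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-2, 2, 64)
y, dy = np.sin(x), np.cos(x)               # target map and its derivative

h, lr = 12, 0.02
W1 = rng.normal(scale=0.5, size=h); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h)

for _ in range(2000):
    A = np.tanh(np.outer(x, W1) + b1)      # (N, h) hidden activations
    f = A @ W2                             # net output
    df = ((1 - A ** 2) * W1) @ W2          # exact d f / d x
    e, de = f - y, df - dy
    # Gradient of 0.5*mean(e^2) + 0.5*mean(de^2) with respect to W2,
    # keeping W1 and b1 fixed.
    g2 = (A.T @ e + ((1 - A ** 2) * W1).T @ de) / len(x)
    W2 -= lr * g2

print("value MSE:", (e ** 2).mean(), "derivative MSE:", (de ** 2).mean())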


Journal ArticleDOI
TL;DR: It is shown that, for feedforward nets with a single hidden layer, a single output node, and the "transfer function" tanh(s), the net is uniquely determined by its input-output map, up to an obvious finite group of symmetries.

301 citations
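
That symmetry group can be checked numerically: permuting hidden units, or flipping the signs of one unit's weights and bias (tanh is odd), leaves the input-output map intact. A small sketch:

import numpy as np

rng = np.random.default_rng(5)
W1, b1, W2 = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4)

def net(x, W1, b1, W2):
    return np.tanh(W1 @ x + b1) @ W2

x = rng.normal(size=3)

# Sign flip of hidden unit 0: since tanh(-s) = -tanh(s), negating the
# unit's input weights, bias, and output weight changes nothing.
W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
W1f[0] *= -1; b1f[0] *= -1; W2f[0] *= -1

perm = [2, 0, 3, 1]   # relabeling the hidden units is also invisible

print(net(x, W1, b1, W2))
print(net(x, W1f, b1f, W2f))                  # same value
print(net(x, W1[perm], b1[perm], W2[perm]))   # same value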


Journal ArticleDOI
TL;DR: A linear neural unit with a modified anti-Hebbian learning rule is shown to be able to optimally fit curves, surfaces, and hypersurfaces by adaptively extracting the minor component of the input data set.
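
A sketch of the minor-component idea with a plain anti-Hebbian update plus renormalization (the paper's exact rule and constants may differ): the weight vector converges, up to sign, to the normal of the best-fit line through the data.

import numpy as np

rng = np.random.default_rng(6)
# Points scattered around the line y = 0.5 x.
x1 = rng.normal(size=4000)
data = np.stack([x1, 0.5 * x1 + 0.1 * rng.normal(size=x1.size)], axis=1)

w, lr = rng.normal(size=2), 0.01
for x in data:
    y = w @ x
    w -= lr * y * x          # anti-Hebbian update
    w /= np.linalg.norm(w)   # keep the weight vector on the unit sphere

print("learned normal:", w)  # matches the true normal up to sign
print("true normal   :", np.array([-0.5, 1.0]) / np.hypot(0.5, 1.0))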

Journal ArticleDOI
TL;DR: The gamma neural model is a neural network architecture for processing temporal patterns: only current signal values are presented to the neural net, which adapts its own internal memory to store the past.
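
A sketch of the gamma memory's tap recursion, x_k(n) = (1 - mu) x_k(n-1) + mu x_{k-1}(n-1); the depth K and memory parameter mu below are arbitrary choices:

import numpy as np

def gamma_taps(signal, K=4, mu=0.5):
    # x[0] is the current input; each tap x[k] leakily integrates x[k-1].
    x = np.zeros(K + 1)
    history = []
    for u in signal:
        x_new = x.copy()
        x_new[0] = u
        for k in range(1, K + 1):
            x_new[k] = (1 - mu) * x[k] + mu * x[k - 1]
        x = x_new
        history.append(x.copy())
    return np.array(history)   # shape (T, K+1): memory features for a net

taps = gamma_taps(np.sin(0.2 * np.arange(100)))
print(taps[-1])  # current value plus a progressively smoothed past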

Journal ArticleDOI
TL;DR: A new algorithm, based on ideas of back propagation learning, is proposed for source separation, and the algorithm can deal even with nonlinear mixtures.

Journal ArticleDOI
TL;DR: An objective function formulation of the Bienenstock, Cooper, and Munro (BCM) theory of visual cortical plasticity is presented that permits the connection between the unsupervised BCM learning procedure and various statistical methods, in particular, that of Projection Pursuit.
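
A sketch of the BCM rule that the objective function formalizes; the input distribution and rates are illustrative assumptions, and convergence depends on them. The sliding threshold makes the neuron favor projections with multi-modal, projection-pursuit-like structure.

import numpy as np

rng = np.random.default_rng(7)
# Two-dimensional input; the first direction is bimodal (interesting),
# the second is plain Gaussian noise.
n = 20000
s = rng.choice([-2.0, 2.0], size=n) + 0.3 * rng.normal(size=n)
X = np.stack([s, rng.normal(size=n)], axis=1)

w, theta, lr, tau = rng.normal(scale=0.1, size=2), 1.0, 1e-3, 0.01
for x in X:
    y = w @ x
    w += lr * y * (y - theta) * x          # BCM weight update
    theta += tau * (y ** 2 - theta)        # sliding threshold ~ E[y^2]

print("learned direction:", w / np.linalg.norm(w))  # tends toward the bimodal axis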

Journal ArticleDOI
TL;DR: In this paper some new stability conditions are derived using a novel Lyapunov function; they provide milder constraints on the connection weights than the conventional results.

Journal ArticleDOI
TL;DR: An explicit approximation formula is proposed that is noise resistant, is easily modified as the patterns change, and can be applied to approximate implicitly defined functions, which is useful in control theory.

Journal ArticleDOI
TL;DR: This paper applies optimal filtering techniques to train feedforward networks in the standard supervised learning framework, and presents three algorithms which are computationally more expensive than standard back propagation, but local at the neuron level.

Journal ArticleDOI
TL;DR: Simulations were performed of physiological interactions among excitatory and inhibitory neurons in anatomically realistic local-circuit architectures modeled after hippocampal field CA1, revealing specific physiological characteristics of the input to the network that enable it to closely approximate an ideal winner-take-all mechanism.

Journal ArticleDOI
TL;DR: A learning algorithm for neural networks based on genetic algorithms is proposed and a simplified model for a brain with sensory and motor neurons is studied, whose structure is solely determined by an evolutionary process.
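
A sketch of the weights-as-genome idea on a toy task; the task, encoding, and GA settings are assumptions, and the paper evolves network structure as well, which this sketch does not.

import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # XOR-like target

def fitness(genome):
    # Decode a 17-gene genome into a 2-4-1 network.
    W1 = genome[:8].reshape(2, 4); b1 = genome[8:12]
    W2 = genome[12:16];            b2 = genome[16]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return -np.mean((out - y) ** 2)          # higher is better

pop = rng.normal(size=(40, 17))
for gen in range(200):
    fit = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]      # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(17) < 0.5                # uniform crossover
        children.append(np.where(mask, a, b) + 0.1 * rng.normal(size=17))
    pop = np.array(children)

print("best MSE:", -max(fitness(g) for g in pop))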

Journal ArticleDOI
Yoshifusa Ito
TL;DR: It is derived that any function continuous on R^d ∪ {∞} (the one-point compactification of R^d) can be likewise approximated, in which case the uniform approximation can be implemented without scaling of h.

Journal ArticleDOI
TL;DR: A collicular mechanism is shown to provide plausible explanations for conflicting results upon electrical stimulation of rostral versus caudal sites on the colliculus, and can generate reasonable trajectories for both eye and head platforms in the head-free condition.

Journal ArticleDOI
TL;DR: It is implied that local PCA algorithms should always incorporate hierarchical rather than more competitive, symmetric decorrelation, because of the hierarchical algorithms' superior performance.

Journal ArticleDOI
TL;DR: A general theoretical basis for the construction of mapping neural networks is presented, based on the Parzen window estimator for joint probability density functions, and consistent estimators for continuous conditional expectation functions are deduced.
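
A sketch of the resulting estimator: with Gaussian Parzen windows, the conditional-expectation estimate reduces to kernel-weighted averaging of the training targets (a Nadaraya-Watson/GRNN-style form; the bandwidth here is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(9)
Xtr = rng.uniform(-3, 3, size=200)
ytr = np.sin(Xtr) + 0.1 * rng.normal(size=200)

def predict(x, h=0.3):
    w = np.exp(-0.5 * ((x - Xtr) / h) ** 2)   # Gaussian window weights
    return (w * ytr).sum() / w.sum()          # estimated E[y | x]

print(predict(1.0), np.sin(1.0))  # estimate vs. true regression value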

Journal ArticleDOI
TL;DR: A layer of excitatory neurons with small asymmetric excitatory connections and strong coupling to a single inhibitory interneuron is considered; if the inhibition is sufficiently slower than the excitation, the neurons completely synchronize to a global periodic solution.

Journal ArticleDOI
TL;DR: A new methodology based on fuzzy-set-theoretic connectives to achieve information fusion in computer vision systems is introduced in this article, where the proposed scheme may be treated as a neural network in which fuzzy aggregation functions are used as activation functions.

Journal ArticleDOI
TL;DR: A new methodology for faster supervised temporal learning in nonlinear neural networks is presented, in which the adjoint equations are solved simultaneously with the activation dynamics of the neural network, and it is shown how teacher forcing can be modulated in time as learning proceeds.

Journal ArticleDOI
TL;DR: It is concluded that the incorporation of psychologically and biologically plausible structural and functional characteristics, like modularity, unsupervised (competitive) learning, and a novelty dependent learning rate, may contribute to solving some of the problems often encountered in connectionist modeling.

Journal ArticleDOI
TL;DR: The sensitive dependence on initial conditions, which is shown to exist even for very small networks, sets a limit on any long-term prediction of the neural system's evolution, unless the network adjusts its parameters through plasticity in order to avoid chaotic regimes.

Journal ArticleDOI
TL;DR: This work considers some neural networks which have interesting oscillatory dynamics and analyzes stability and bifurcation properties and considers the possibility of constructing VLSI circuits with clocking techniques to implement such neural networks having prescribed fixed-points or periodic orbits.

Journal ArticleDOI
David G. Stork, James D. Allen
TL;DR: A three-layer feedforward network is constructed which can solve the N-bit parity problem employing just two hidden units, and the implications of incorporating problem constraints into transfer functions for general pattern classification problems are discussed.
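
The two-hidden-unit construction itself isn't reproduced here, but the underlying point, that building problem constraints into the transfer function collapses parity's apparent complexity, can be illustrated even more starkly: with a cosine transfer function, a single unit with unit weights computes N-bit parity, since cos(pi*k) = (-1)^k.

import numpy as np
from itertools import product

def parity_unit(bits):
    s = np.sum(bits)                        # all input weights equal 1
    return 0.5 * (1 - np.cos(np.pi * s))    # 0 if s is even, 1 if odd

# Exhaustive check over all 4-bit inputs.
for bits in product([0, 1], repeat=4):
    assert abs(parity_unit(bits) - sum(bits) % 2) < 1e-9
print("single cosine unit computes parity on all 4-bit inputs")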