
Showing papers in "Neural Networks in 1993"


Journal ArticleDOI
TL;DR: Experiments show that SCG is considerably faster than BP, CGL, and BFGS, and avoids a time-consuming line search.

3,882 citations
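The trick that lets SCG skip the line search can be sketched in a few lines: estimate curvature along the search direction with a finite-difference Hessian-vector product and damp it with a scale parameter, Levenberg-Marquardt style. A simplified sketch of that core step, not Møller's full algorithm; the gradient-only interface is our choice:

```python
import numpy as np

def scg_step(w, p, grad, lam=1e-3, sigma=1e-4):
    """One SCG-style step: a scaled curvature estimate replaces the line search."""
    g = grad(w)
    s = (grad(w + sigma * p) - g) / sigma      # finite-difference H @ p
    delta = p @ s + lam * (p @ p)              # scaled curvature p'Hp + lam*|p|^2
    if delta <= 0:                             # indefinite curvature: raise damping
        lam = 2.0 * (lam - delta / (p @ p))
        delta = p @ s + lam * (p @ p)          # now guaranteed positive
    alpha = -(g @ p) / delta                   # closed-form step size, no search
    return w + alpha * p

# Usage on a toy quadratic 0.5*w'Aw - b'w:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda w: A @ w - b
w = np.zeros(2)
p = -grad(w)                                   # first direction: steepest descent
w = scg_step(w, p, grad)
```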


Journal ArticleDOI
TL;DR: In this article, the authors show that most of the characterizations that were reported thus far in the literature are special cases of the following general result: a standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial.

1,581 citations
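The nonpolynomial condition is easy to see numerically: with random hidden weights and a least-squares output layer, a tanh network fits a continuous target, while a polynomial activation spans only a fixed-degree polynomial space and cannot. A hedged illustration; the setup is ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)
target = np.sin(3 * x)                          # continuous target on a compactum

def random_features(x, act, n_hidden=100):
    a = rng.normal(scale=4.0, size=n_hidden)    # random input weights
    b = rng.uniform(-np.pi, np.pi, n_hidden)    # random biases
    return act(np.outer(x, a) + b)              # (n_points, n_hidden) design matrix

for name, act in [("tanh", np.tanh), ("x^2 (polynomial)", np.square)]:
    H = random_features(x, act)
    c, *_ = np.linalg.lstsq(H, target, rcond=None)   # least-squares output layer
    print(name, "max error:", np.max(np.abs(H @ c - target)))
# tanh drives the error toward 0; the squared activation cannot, since
# (a*x + b)**2 spans only {1, x, x**2}.
```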


Journal ArticleDOI
TL;DR: It is proved that any finite time trajectory of a given n-dimensional dynamical system can be approximately realized by the internal state of the output units of a continuous time recurrent neural network with n output units, some hidden units, and an appropriate initial condition.

961 citations
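The approximating system in this result is an ordinary continuous-time recurrent network, dx/dt = -x + W sigma(x) + b, whose output units' states trace the target trajectory. A minimal Euler integration showing the form; the weights here are arbitrary, for illustration only:

```python
import numpy as np

def ctrnn(W, b, x0, dt=0.01, steps=1000):
    """Forward-Euler integration of dx/dt = -x + W @ tanh(x) + b."""
    x = np.array(x0, dtype=float)
    states = [x.copy()]
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + b)
        states.append(x.copy())
    return np.array(states)                     # internal state over time

W = np.array([[0.0, -2.0], [2.0, 0.0]])         # skew coupling: oscillatory flow
traj = ctrnn(W, b=np.zeros(2), x0=[1.0, 0.0])   # rows trace a 2-D trajectory
```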


Journal ArticleDOI
TL;DR: It is shown that standard feedforward networks with as few as a single hidden layer can uniformly approximate continuous functions on compacta provided that the activation function is locally Riemann integrable and nonpolynomial.

529 citations


Journal ArticleDOI
TL;DR: A novel neural dynamics which greatly enhances the ability of associative neural networks is presented, and most of the problems of the conventional model are overcome by the improved dynamics.

364 citations


Journal ArticleDOI
Lei Xu
TL;DR: It is proved that for one layer with n₁ linear units, the LMSER rule lets their weights converge to rotations of the data's first n₂ principal components, corresponding to the global minimum in the Mean Square Error (MSE) landscape, which has many saddles but no local minimum.

338 citations
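In the linear case the LMSER principle minimizes the reconstruction error E = ||x - W'Wx||^2, and plain gradient descent on E drives the weight rows into the principal subspace, consistent with the saddles-but-no-local-minima landscape. A toy batch-gradient sketch; dimensions and step size are our choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 0.5, 0.3, 0.1])  # anisotropic data
C = X.T @ X / len(X)                            # sample covariance
W = rng.normal(scale=0.1, size=(2, 5))          # two linear units

for _ in range(2000):
    R = W @ C
    # gradient (up to a factor 2) of E = tr[(I - W'W) C (I - W'W)]
    g = -2.0 * R + W @ (W.T @ R) + (R @ W.T) @ W
    W -= 0.01 * g

# The rows of W now span the top-2 principal subspace of C (up to rotation).
```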


Journal ArticleDOI
TL;DR: New learning schemes using feedback-error-learning for a neural network model applied to adaptive nonlinear feedback control are presented, and it is shown that a form of learning impedance control results when one of the proposed schemes is used in Cartesian space.

290 citations
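Feedback-error-learning fits in one loop: a fixed feedback controller stabilizes the plant while its output u_fb doubles as the training error for the feedforward network, which gradually absorbs the inverse dynamics. A toy sketch; the plant, gains, and linear "network" are stand-ins, not the paper's robot model:

```python
import numpy as np

W = np.zeros(3)                                 # linear "network" weights
x = np.zeros(2)                                 # plant state: [position, velocity]
dt = 0.01

def plant(x, u):
    pos, vel = x
    acc = u - 0.5 * vel                         # unknown dynamics to be inverted
    return np.array([pos + dt * vel, vel + dt * acc])

for k in range(5000):
    t = k * dt
    xd = np.array([np.sin(t), np.cos(t)])       # desired position and velocity
    feat = np.array([xd[0], xd[1], 1.0])        # network input: desired state
    u_nn = W @ feat                             # feedforward (inverse-model) term
    u_fb = 20.0 * (xd[0] - x[0]) + 5.0 * (xd[1] - x[1])   # PD feedback
    x = plant(x, u_nn + u_fb)
    W += 0.001 * u_fb * feat                    # the feedback error trains the net

# As W converges, u_fb shrinks: the network has absorbed the inverse dynamics.
```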


Journal ArticleDOI
TL;DR: It is shown that the modification ought to allow a Kohonen network to map sequences of inputs without having to resort to external time delay mechanisms.

272 citations
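One way to realize such a modification, in the spirit of temporal Kohonen maps: give each unit a leaky activation that integrates its match to the inputs over time, and pick the winner on that trace, so no external delay lines are needed. The decay constant and the winner-only update below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.uniform(size=(25, 2))                   # codebook vectors of 25 units
U = np.zeros(25)                                # leaky per-unit activations
decay, lr = 0.7, 0.1

for x in rng.uniform(size=(200, 2)):            # a sequence of 2-D inputs
    U = decay * U - 0.5 * np.sum((x - w) ** 2, axis=1)  # integrate match over time
    win = np.argmax(U)                          # winner chosen on the trace
    w[win] += lr * (x - w[win])                 # winner-only update; a full SOM
                                                # would also update neighbours
```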


Journal ArticleDOI
TL;DR: This paper shows the general superiority of the "extended" nonconvergent methods compared to classical penalty term methods, simple stopped training, and methods which only vary the number of hidden units.

246 citations
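The simplest member of the nonconvergent family is validation-based early stopping; the paper's "extended" criteria refine when to stop, but the skeleton looks like this generic patience rule. A sketch with caller-supplied train_step/val_loss placeholders:

```python
def train_with_early_stopping(train_step, val_loss, patience=20):
    """train_step() runs one epoch and returns model state; val_loss scores it."""
    best, best_state, waited = float("inf"), None, 0
    while waited < patience:
        state = train_step()
        loss = val_loss(state)
        if loss < best:                          # validation improved: keep going
            best, best_state, waited = loss, state, 0
        else:                                    # no improvement this epoch
            waited += 1
    return best_state                            # weights from the best epoch seen
```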


Journal ArticleDOI
TL;DR: It is argued that the Self-Organizing Map (SOM), which has been found very effective in many information-processing applications, may be implemented in biological neural networks whose cells communicate by transsynaptic signals as well as by diffuse chemical substances.

189 citations


Journal ArticleDOI
TL;DR: This work proposes a new neural network model for trajectory formation based on the minimum torque-change criterion; using a forward dynamics model, an inverse dynamics model, and a trajectory formation mechanism, it generates an approximate minimum torque-change trajectory.

Journal ArticleDOI
TL;DR: Simulation results are presented, showing that initializing back propagation networks with prototypes generally results in drastic reductions in training time, improved robustness against local minima, and better generalization.
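Prototype initialization in its simplest form seeds the first-layer weights with class centroids, so hidden units start as template matchers rather than random hyperplanes. A sketch of just the initializer; the paper's handling of biases and output weights may differ:

```python
import numpy as np

def prototype_init(X, y):
    """One weight row per class: the centroid of that class's inputs."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

# Usage: W1 = prototype_init(X_train, y_train), then train with back propagation.
```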

Journal ArticleDOI
TL;DR: This paper shows how adapting the steepness of the sigmoids during learning treats both learning acceleration and network optimization in a common gradient-descent framework, yielding an improvement in the mean convergence rate over standard back propagation and good optimization performance.
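Making the steepness trainable adds exactly one more chain-rule factor: with y = sigmoid(beta*z), dE/dbeta = dE/da * z, where a = beta*z is the scaled pre-activation. A one-unit numerical sketch of that extra update; the learning rates are our choices:

```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

w, beta = 0.5, 1.0
for x, t in [(1.0, 0.9), (-1.0, 0.1)] * 500:    # toy input/target pairs
    z = w * x                                   # pre-activation
    y = sigmoid(beta * z)
    dE_da = (y - t) * y * (1 - y)               # dE/d(beta*z) for E = 0.5*(y-t)**2
    w -= 0.1 * dE_da * beta * x                 # usual weight gradient
    beta -= 0.1 * dE_da * z                     # the extra steepness gradient
```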

Journal ArticleDOI
TL;DR: The probability of premature saturation in the first epoch of the BP learning procedure is derived in terms of the maximum value of the initial weights, the number of nodes in each layer, and the maximum slope of the sigmoidal activation function; the result has been verified by Monte Carlo simulation.
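The same quantity is easy to probe empirically: draw initial weights uniformly in [-w_max, w_max] and count hidden sigmoids that start in the flat tails. A Monte Carlo sketch using the paper's variables (w_max, layer widths) but not its closed-form result:

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hidden, w_max, trials = 100, 50, 1.0, 1000
frac = 0.0
for _ in range(trials):
    W = rng.uniform(-w_max, w_max, size=(n_hidden, n_in))   # initial weights
    x = rng.uniform(0.0, 1.0, size=n_in)                    # one input pattern
    a = 1.0 / (1.0 + np.exp(-W @ x))                        # hidden activations
    frac += np.mean((a < 0.05) | (a > 0.95))                # units in the flat tails
print("expected saturated fraction:", frac / trials)
```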

Journal ArticleDOI
TL;DR: A piecewise linear model of the nonmonotonic neuron is used and the existence and stability of equilibrium states of the recalling process are investigated and two kinds of theoretical estimates of the absolute capacity are derived.

Journal ArticleDOI
TL;DR: It is shown by numerical investigations that a complicated memory search task is executable using complex dynamics in a recurrent neural network model with asymmetric synaptic connection.

Journal ArticleDOI
TL;DR: An algorithm for the training of multilayered neural networks based solely on linear algebraic methods is presented; up to a certain limit of learning accuracy, its convergence speed is orders of magnitude better than that of classical back propagation.
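The linear-algebraic ingredient can be sketched for a single layer: if the layer's desired outputs are known, invert the sigmoid to get target pre-activations and solve for the weights by least squares, with no gradient steps. A hedged sketch of that one step; the full algorithm alternates something like this across layers:

```python
import numpy as np

def solve_layer(H, T, eps=1e-6):
    """H: (n, h) layer inputs; T: (n, o) desired sigmoid outputs in (0, 1)."""
    T = np.clip(T, eps, 1.0 - eps)
    A = np.log(T / (1.0 - T))                    # inverse sigmoid: target pre-acts
    W, *_ = np.linalg.lstsq(H, A, rcond=None)    # one linear least-squares solve
    return W                                     # afterwards sigmoid(H @ W) ~ T
```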

Journal ArticleDOI
TL;DR: It is shown that the weights of continuous-time feedback neural networks are essentially uniquely determined by external input/output measurements.

Journal ArticleDOI
TL;DR: A universal asymptotic behavior of learning curves for general noiseless dichotomy machines (neural networks) is proved: irrespective of the architecture of a machine, the average predictive entropy, or information gain, converges to 0 as the number of training examples grows.

Journal ArticleDOI
TL;DR: A polynomial-time algorithm for the construction and training of a class of multilayer perceptrons for classification is presented; it uses linear programming models to incrementally generate the hidden layer in a restricted higher-order perceptron.

Journal ArticleDOI
TL;DR: This article considers the representational ability of feedforward networks (FFNs) in terms of the fan-in required by the hidden units of a network, and proves that a higher-order network (HON) is at least as powerful as any other FFN architecture when the orders of the networks are the same.

Journal ArticleDOI
TL;DR: A continuous-time neural theory is presented that proposes that the nonmonotonic behavior of retinal neurons serves as a simple form of memory that adaptively filters visual signals to guide attentional mechanisms and that the transient activity is essential in achieving sharp dynamic percepts while preserving a good sensitivity to light.

Journal ArticleDOI
TL;DR: An extension of Oja's learning algorithm for principal component analysis is presented which is capable of adapting the weights of a higher-order neuron to pick up higher-order correlations from a given data set.
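The extension keeps Oja's update dw = eta * y * (phi - y*w) but feeds the neuron an augmented vector phi(x) containing products of input components, so convergence is toward the dominant eigenvector of E[phi phi'], i.e., a higher-order correlation structure. The degree-2 feature map below is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)

def phi(x):                                      # degree-2 feature map
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate([x, quad])

w = rng.normal(scale=0.1, size=phi(np.zeros(3)).shape)
for x in rng.normal(size=(2000, 3)):
    f = phi(x)
    y = w @ f                                    # higher-order neuron output
    w += 0.005 * y * (f - y * w)                 # Oja's rule applied to phi(x)
# w heads toward the principal eigenvector of E[phi(x) phi(x)'].
```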

Journal ArticleDOI
TL;DR: A linearized analysis of the Hopfield network's dynamics forms the main theory of the paper, followed by a series of experiments in which some problem mappings are investigated in the context of these dynamics.
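The flavour of such a linearized analysis: around the origin, du/dt = -u + W sigma(u) becomes du/dt ~ (sigma'(0) W - I) u, so the eigenstructure of W predicts which modes grow first and hence which states the network favours. A generic sketch; the weight matrix is arbitrary, not an example from the paper:

```python
import numpy as np

W = np.array([[ 0.0, 1.2, -0.3],
              [ 1.2, 0.0,  0.5],
              [-0.3, 0.5,  0.0]])                # symmetric connection matrix
J = 1.0 * W - np.eye(3)                          # Jacobian (sigma'(0) = 1 assumed)
vals, vecs = np.linalg.eigh(J)
print("growth rates:", vals)                     # positive => unstable mode
print("fastest-growing mode:", vecs[:, np.argmax(vals)])
```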

Journal ArticleDOI
TL;DR: For recognition and control of multiple manipulated objects, two learning schemes for neural network controllers based on feedback-error-learning and modular architecture are presented.


Journal ArticleDOI
TL;DR: The paper describes changes in the structure (anatomy) of a multilevel neural network trained by the back-propagation algorithm, and considers the mechanisms of cell propagation and the degeneration of inactive synapses and inactive cells.

Journal ArticleDOI
TL;DR: This network with interneurons suggests that negative feedback may be used in a perceptual system to optimise forward transmission of information.

Journal ArticleDOI
TL;DR: Simulation results show that the network is able to invert the plant's behaviour and characteristics, thus learning to control the plant accurately; learning is accelerated by local adaptation of the learning rate.

Journal ArticleDOI
Masahiko Arai
TL;DR: It is shown that for the former, I - 1 hidden units are necessary and sufficient for I learning patterns, while for the latter about I/3 hidden units are sufficient; the necessary number of hidden units can thus be reduced by taking into account the features of the learning pattern distributions.