Showing papers in "Neural Networks in 2001"
••
TL;DR: It is argued that Rank Order Coding is not only very efficient, but also easy to implement in biological hardware: neurons can be made sensitive to the order of activation of their inputs by including a feed-forward shunting inhibition mechanism that progressively desensitizes the neuronal population during a wave of afferent activity.
776 citations
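A minimal sketch of how such a rank-order readout can be computed, with a geometric desensitization factor standing in for the feed-forward shunting inhibition (the function name, the factor `mod` and the toy values are illustrative, not taken from the paper):

```python
import numpy as np

def rank_order_response(weights, spike_times, mod=0.8):
    """Rank-order readout sketch: each afferent contributes its weight scaled
    by mod**rank, where rank is its position in the order of spike arrival.
    The geometric attenuation mimics a progressive desensitization of the
    neuron during the afferent wave (illustrative, not the paper's circuit)."""
    order = np.argsort(spike_times)          # earliest spike first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))     # rank of each afferent
    return np.sum(weights * mod ** ranks)

# toy usage: three afferents firing in the order 2, 0, 1
print(rank_order_response(np.array([0.5, 0.2, 0.9]),
                          np.array([1.2, 1.7, 0.4])))
```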
••
TL;DR: This paper introduces the use of least squares support vector machines (LS-SVMs) for the optimal control of nonlinear systems, with examples that include swinging up an inverted pendulum with local stabilization at the endpoint and a tracking problem for a ball-and-beam system.
516 citations
••
TL;DR: It can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously.
473 citations
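A sketch of what a third, backpropagation-like phase could look like for a Gaussian RBF network under a squared-error loss, adapting centers, widths and output weights in one joint gradient step (plain gradient descent and the learning rate are assumptions for illustration):

```python
import numpy as np

def rbf_forward(X, C, s, W):
    # Gaussian RBF layer followed by a linear output layer
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)    # (N, K) squared distances
    Phi = np.exp(-d2 / (2 * s ** 2))                        # (N, K) activations
    return d2, Phi, Phi @ W                                 # outputs are (N, O)

def third_phase_step(X, Y, C, s, W, lr=1e-3):
    """One joint gradient step on centers C (K,D), widths s (K,) and output
    weights W (K,O) for targets Y (N,O); an illustrative sketch only."""
    d2, Phi, out = rbf_forward(X, C, s, W)
    err = out - Y                                           # gradient of 0.5*||out - Y||^2
    gW = Phi.T @ err
    dPhi = err @ W.T                                        # (N, K)
    diff = X[:, None, :] - C[None, :, :]                    # (N, K, D)
    gC = ((dPhi * Phi)[:, :, None] * diff / s[None, :, None] ** 2).sum(0)
    gs = ((dPhi * Phi) * d2 / s ** 3).sum(0)
    return C - lr * gC, s - lr * gs, W - lr * gW
```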
••
TL;DR: The resonate-and-fire model is used to illustrate possible sensitivity of biological neurons to the fine temporal structure of the input spike train, and is computationally efficient and suitable for simulations of large networks of spiking neurons.
429 citations
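As a rough illustration of the model's computational cheapness, a minimal Euler-integration sketch of a resonate-and-fire unit as a damped oscillator in a single complex variable (parameter values and the reset rule are illustrative):

```python
import numpy as np

def resonate_and_fire(I, b=-0.1, w=2 * np.pi * 10, thresh=1.0, dt=1e-3, z_reset=0 + 0j):
    """Resonate-and-fire sketch: dz/dt = (b + i*w) z + I(t); the unit fires when
    the imaginary part of z reaches a threshold and is then reset.
    b is the damping, w the eigenfrequency; values here are illustrative."""
    z, spikes = z_reset, []
    for t, i_t in enumerate(I):
        z += dt * ((b + 1j * w) * z + i_t)   # one Euler step of the oscillator
        if z.imag >= thresh:
            spikes.append(t * dt)
            z = z_reset
    return spikes
```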
••
TL;DR: Mesh-free procedures for solving linear differential equations (ODEs and elliptic PDEs) based on multiquadric (MQ) radial basis function networks (RBFNs) are presented; the IRBFN method is more accurate than the DRBFN one, and experience so far shows that beta can be chosen in the range 7 <= beta <= 10 for the former.
329 citations
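A toy direct-collocation sketch in the spirit of an MQ-RBFN solver, applied to u''(x) = -sin(x) with homogeneous boundary conditions; the test problem, grid size and shape parameter are illustrative, and the indirect (IRBFN) variant and the paper's beta range are not reproduced here:

```python
import numpy as np

def solve_ode_mq_collocation(n=20, r=0.5):
    """Direct MQ-RBF collocation sketch for u''(x) = -sin(x), u(0)=u(pi)=0,
    whose exact solution is sin(x); n and r are illustrative choices."""
    x = np.linspace(0, np.pi, n)                     # collocation points = centres
    X, c = x[:, None], x[None, :]
    phi = np.sqrt((X - c) ** 2 + r ** 2)             # multiquadric basis
    d2phi = r ** 2 / phi ** 3                        # its exact second derivative
    A, b = d2phi.copy(), -np.sin(x)
    A[0], b[0] = phi[0], 0.0                         # boundary condition at 0
    A[-1], b[-1] = phi[-1], 0.0                      # boundary condition at pi
    w = np.linalg.solve(A, b)
    return x, phi @ w                                # approximate u on the grid

x, u = solve_ode_mq_collocation()
print(np.max(np.abs(u - np.sin(x))))                 # residual against the exact solution
```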
••
TL;DR: In the most thoroughly analyzed regression problem, the best models were those with less restrictive priors, which emphasizes the major advantage of the Bayesian approach: one is not forced to guess unknown attributes, such as the number of degrees of freedom in the model.
327 citations
••
TL;DR: Sufficient conditions ensuring global exponential stability of delayed Hopfield neural networks are given.
299 citations
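For context, a hedged sketch of the setting: the delayed Hopfield model and one representative diagonal-dominance-type sufficient condition from this literature, shown for illustration rather than as the paper's exact criterion.

```latex
% Delayed Hopfield network with activation functions f_j (Lipschitz constants L_j)
\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
             + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_{ij})\bigr) + I_i,
\qquad i = 1,\dots,n.

% Representative sufficient condition for global exponential stability
% (illustrative: decay rates dominate the absolute weighted gains):
c_i > \sum_{j=1}^{n} \bigl( |a_{ij}| + |b_{ij}| \bigr) L_j,
\qquad i = 1,\dots,n.
```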
••
TL;DR: A Real-Coded Genetic Algorithm that uses operators appropriate for this encoding type is presented to train Recurrent Neural Networks, and is compared with the Real-Time Recurrent Learning algorithm on fuzzy grammatical inference.
234 citations
••
TL;DR: The basic idea is the concept of the utility of a codeword, a powerful instrument for overcoming one of the main drawbacks of clustering algorithms: the results achieved are generally poor when the initial codebook is badly chosen.
232 citations
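A toy sketch of a per-codeword utility inside a quantization step, taking utility as a cell's distortion divided by the mean cell distortion and re-seeding codewords whose utility is far below average (this definition and the re-seeding heuristic are assumptions for illustration, not the paper's exact procedure):

```python
import numpy as np

def codeword_utility(X, codebook):
    """Utility of each codeword: the distortion of its Voronoi cell divided by
    the mean cell distortion (a common definition, assumed here)."""
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (N, K)
    nearest = d2.argmin(1)
    distortion = np.array([d2[nearest == k, k].sum() for k in range(len(codebook))])
    return distortion / distortion.mean(), nearest

def reseed_low_utility_codewords(X, codebook, rng, low=0.1):
    # Illustrative heuristic: move codewords whose utility is far below average
    # next to data points from the cell of the highest-utility codeword.
    utility, nearest = codeword_utility(X, codebook)
    busy_points = X[nearest == utility.argmax()]
    for k in np.where(utility < low)[0]:
        pick = busy_points[rng.integers(len(busy_points))]
        codebook[k] = pick + 1e-3 * rng.standard_normal(X.shape[1])
    return codebook
```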
••
TL;DR: An electronic circuit modelling the spike generation process in the biological neuron is presented; it can simulate the spiking behaviour of several different types of biological neurons and is small enough that many neurons can be implemented on a single silicon chip.
158 citations
••
TL;DR: It is shown, using a learning rule involving spike-timing-dependent plasticity, that neuronal maps in the output layer can be trained to recognize natural photographs of faces; recognition was also remarkably resistant to image noise and reductions in contrast.
••
TL;DR: A scheme for implementing highly-connected, reconfigurable networks of integrate-and-fire neurons in VLSI employing probabilistic transmission of spikes to implement continuous-valued synaptic weights, and memory-based look-up tables to implement arbitrary interconnection topologies.
••
TL;DR: A Modified General Regression Neural Network is presented as an easy-to-use 'black box' tool: available data are fed in and a reasonable regression surface is obtained, which solves several practical problems of standard feed-forward networks.
••
TL;DR: In this article, a programmable multi-chip VLSI neuronal system for exploring spike-based information processing models is described, which consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture.
••
TL;DR: This work investigates the formation of a Hebbian cell assembly of spiking neurons, using a temporal synaptic learning curve that includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order.
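A minimal sketch of such a temporal learning curve: an exponential STDP window that potentiates when the presynaptic spike precedes the postsynaptic one and depresses the reverse order (amplitudes and time constant are illustrative):

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    dt > 0 (pre before post) gives potentiation, dt < 0 gives depression;
    the amplitudes and time constant are illustrative, not the paper's."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

print(stdp_dw(10.0, 15.0))   # pre leads post -> positive (potentiation)
print(stdp_dw(15.0, 10.0))   # post leads pre -> negative (depression)
```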
••
TL;DR: Under the generalization of dropping the Lipschitzian hypotheses for output functions, some stability criteria are obtained by using the Liapunov functional method, which can be used to analyze the dynamics of biological neural systems or to design globally stable artificial neural networks.
••
TL;DR: This work proposes an extension to the known Cell Structures, growing Radial Basis Function-like networks, that enables them to learn the number of nodes needed to solve the current task and to dynamically adapt the learning rate of each node separately.
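A sketch of the kind of growth rule such networks typically use: accumulate error at the best-matching node, adapt a per-node learning rate, and insert a new node where the accumulated error exceeds a budget (the threshold, rates and insertion point are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

class GrowingRBFSketch:
    """Toy growing RBF-like structure: each node keeps a centre, a local error
    counter and its own learning rate; a node is inserted where error piles up."""

    def __init__(self, dim, rng):
        self.centres = rng.standard_normal((2, dim))   # start with two nodes
        self.errors = np.zeros(2)
        self.lrs = np.full(2, 0.1)

    def step(self, x, grow_threshold=5.0):
        d = ((self.centres - x) ** 2).sum(1)
        best = d.argmin()
        self.errors[best] += d[best]                   # accumulate local error
        self.centres[best] += self.lrs[best] * (x - self.centres[best])
        self.lrs[best] *= 0.999                        # per-node learning-rate decay
        if self.errors[best] > grow_threshold:         # insert a node near the worst fit
            new = 0.5 * (self.centres[best] + x)
            self.centres = np.vstack([self.centres, new])
            self.errors = np.append(self.errors, 0.0)
            self.errors[best] = 0.0
            self.lrs = np.append(self.lrs, 0.1)
```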
••
TL;DR: A learning control system is developed for the phylogenetically oldest behaviors of oculomotor control, the stabilization reflexes of gaze; it is derived from the biologically inspired principle of feedback-error learning combined with a state-of-the-art non-parametric statistical learning network.
••
TL;DR: The experimental results, including the cancer, diabetes and character-recognition problems, show that the CNNDA can produce compact ANNs with good generalization ability and short training time in comparison with other algorithms.
••
TL;DR: A new method is proposed to stably calculate optimal trajectories based on the minimum commanded torque change criterion; it can obtain trajectories satisfying the Euler-Poisson equations with sufficiently high accuracy.
••
TL;DR: Novel algorithms are introduced to learn the amplitudes of nonlinear activations in layered networks, without any assumption on their analytical form; it is shown that the algorithms speed up convergence and modify the search path in the weight space, possibly reaching deeper minima that may also improve generalization.
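A sketch of the idea for a single unit y = lambda * tanh(w . x): the amplitude lambda receives its own gradient update alongside the weights (the loss, learning rates and names are illustrative):

```python
import numpy as np

def adaptive_amplitude_step(x, target, w, lam, lr_w=0.01, lr_lam=0.01):
    """One gradient step for a single unit y = lam * tanh(w @ x) under squared
    error; both the weights and the activation amplitude lam are adapted.
    An illustrative sketch of the idea, not the paper's exact algorithm."""
    h = np.tanh(w @ x)
    err = lam * h - target
    grad_w = err * lam * (1 - h ** 2) * x     # dE/dw
    grad_lam = err * h                        # dE/dlam
    return w - lr_w * grad_w, lam - lr_lam * grad_lam
```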
••
TL;DR: This paper presents an extended survey of connectionist inference systems, with particular reference to how they perform variable binding and rule-based reasoning and whether they involve distributed or localist representations.
••
TL;DR: Contraction theory may help guide functional modeling of the central nervous system, and conversely it provides a systematic method to build arbitrarily complex robots out of simpler elements.
••
TL;DR: Results from network simulations confirm and extend earlier predictions, based on single-neuron properties and a deterministic state-space analysis, concerning stable propagation of synchronous spiking in cortical neural networks.
••
TL;DR: An algorithm is established to calculate the Bayesian stochastic complexity based on blowing-up techniques from algebraic geometry, and it is proved that the Bayesian generalization error of a hierarchical learning machine is smaller than that of a regular statistical model, even if the true distribution is not contained in the parametric model.
••
TL;DR: It is found that the IF and HH models respond to correlated inputs in totally opposite ways, whereas the IF-FHN model shows behaviour similar to the HH model, suggesting it could serve as a better model for spiking neuron computation.
••
TL;DR: The Fokker-Planck method is applied to the so-called 'synfire chain' network model and it is shown how a synchronous population spike (pulse packet) evolves to a narrow pulse packet or fades away, depending on its initial size and width.
••
TL;DR: Fuzzy ARTMAP was combined with a bank of Kalman filters to group pulses transmitted from different emitters based on their position-specific parameters, and with a module to accumulate evidence from fuzzy ARTMAP responses corresponding to the track defined for each emitter.
••
TL;DR: This paper gives a rigorous stability analysis of a number of algorithms for principal and minor component extraction, obtaining a unified view of the dynamical behaviors of the various algorithms.
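One classic member of this family of algorithms is Oja's rule for extracting the first principal component; a minimal sketch (the learning rate and toy data are illustrative) is:

```python
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
    """Oja's rule: w <- w + lr * y * (x - y * w), with y = w.x, converges for a
    suitable learning rate to the leading eigenvector of the input covariance."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

# toy usage: data stretched along the first axis
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) * np.array([3.0, 1.0, 0.5])
print(oja_first_pc(X))     # approximately +/-[1, 0, 0]
```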
••
TL;DR: The design methodology of a gray-box model is described, and the importance of the choice of the discretization scheme used for transforming the differential equations of the knowledge-based model into a set of discrete-time recurrent equations is emphasized.