Open Access Proceedings Article
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
Gert Cauwenberghs
Vol. 5, pp. 244–251
Abstract:
A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error, allowing substantially faster learning. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.
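The perturbative update described in the abstract can be sketched as follows. This is an illustrative implementation, not the paper's exact formulation: the learning rate `eta`, perturbation amplitude `sigma`, and the quadratic toy error surface are all arbitrary choices made here for demonstration.

```python
import numpy as np

def stochastic_error_descent(error_fn, w, eta=0.1, sigma=0.01, steps=500, seed=0):
    """Model-free error descent via random parallel parameter perturbations.

    Each step perturbs all parameters simultaneously by a random binary
    vector pi, measures the resulting change in error, and updates the
    parameters along -delta_e * pi. No gradient or internal network
    structure is needed; only two error evaluations per step.
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        pi = sigma * rng.choice([-1.0, 1.0], size=w.shape)  # random +/-sigma perturbation
        delta_e = error_fn(w + pi) - error_fn(w)            # measured error change
        w = w - eta * delta_e * pi / sigma**2               # anti-correlated update
    return w

# Hypothetical example: minimize a simple quadratic error surface.
target = np.array([1.0, -2.0, 0.5])
error = lambda w: float(np.sum((w - target) ** 2))
w0 = np.zeros(3)
w_final = stochastic_error_descent(error, w0)
```

On average, `delta_e * pi / sigma**2` equals the true gradient, so the update performs gradient descent in expectation while measuring only scalar error changes.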
Citations
Journal Article
Deep learning in neural networks
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, evolutionary computation, and indirect search for short programs encoding deep and large networks.
Journal Article
Fast exact multiplication by the Hessian
TL;DR: This work derives a technique that directly calculates Hv, where v is an arbitrary vector, and shows that this technique can be used at the heart of many iterative techniques for computing various properties of H, obviating any need to calculate the full Hessian.
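The Hv idea above can be illustrated with a finite-difference approximation, Hv ≈ (∇E(w + εv) − ∇E(w))/ε. Note this is only a sketch of the concept: the cited paper derives an exact analytic operator, whereas the version below is approximate, with the step size `eps` and the quadratic test function chosen here for illustration.

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-5):
    """Approximate H @ v without ever forming the full Hessian H.

    Uses the finite-difference identity
        Hv ~ (grad(w + eps*v) - grad(w)) / eps,
    which costs only two gradient evaluations regardless of dimension.
    """
    return (grad_fn(w + eps * v) - grad_fn(w)) / eps

# Example on a quadratic E(w) = 0.5 * w^T A w, whose Hessian is exactly A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
grad = lambda w: A @ w            # analytic gradient of the quadratic
w = np.array([0.5, -1.0])
v = np.array([1.0, 2.0])
hv = hessian_vector_product(grad, w, v)   # approximates A @ v
```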
Journal Article
Learning in spiking neural networks by reinforcement of stochastic synaptic transmission.
TL;DR: Considers the hypothesis that the randomness of synaptic transmission is harnessed by the brain for learning, in analogy to the way that genetic mutation is utilized by Darwinian evolution.
Journal Article
Stochastic parallel-gradient-descent technique for high-resolution wave-front phase-distortion correction
TL;DR: In this article, a stochastic parallel-gradient-descent (SPGD) adaptive wave-front correction algorithm is proposed for high-resolution wave-front phase-distortion correction, and a performance criterion for parallel-perturbation-based algorithms is introduced and applied to optimize the adaptive system architecture.
Journal Article
Adaptive optics based on analog parallel stochastic optimization: analysis and experimental demonstration
TL;DR: In this paper, a very large-scale integration (VLSI) system implementing a simultaneous-perturbation stochastic-approximation optimization algorithm is applied to real-time adaptive control of multielement wave-front correctors.
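Both adaptive-optics papers above build on simultaneous parallel perturbation of all control channels. A minimal two-sided SPGD sketch, assuming a generic scalar performance metric `J` (the quadratic stand-in, gain `gamma`, and perturbation size `delta` below are illustrative choices, not values from either paper):

```python
import numpy as np

def spgd_step(metric_fn, u, gamma=0.2, delta=0.05, rng=None):
    """One two-sided stochastic parallel-gradient-descent step.

    All control channels are perturbed simultaneously by random +/-delta,
    the metric J is measured on both sides of the perturbation, and the
    control vector u is moved against the measured change dJ.
    """
    rng = rng or np.random.default_rng()
    du = delta * rng.choice([-1.0, 1.0], size=u.shape)  # parallel perturbation
    dJ = metric_fn(u + du) - metric_fn(u - du)          # two-sided metric change
    return u - gamma * dJ * du / (2 * delta**2)

# Hypothetical stand-in for a wave-front distortion metric: a quadratic bowl.
J = lambda u: float(np.sum(u ** 2))
u = np.array([1.0, -0.5, 0.8])
rng = np.random.default_rng(1)
for _ in range(300):
    u = spgd_step(J, u, rng=rng)
```

The two-sided measurement cancels even-order terms of the metric, so the update is an unbiased gradient estimate for quadratic metrics while requiring only scalar measurements, which is why the technique maps well onto analog VLSI hardware.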
References
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser, et al.
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal Article
30 years of adaptive neural networks: perceptron, Madaline, and backpropagation
Bernard Widrow, Michael A. Lehr, et al.
TL;DR: The history, origination, operating characteristics, and basic theory of several supervised neural-network training algorithms (including the perceptron rule, the least-mean-square algorithm, three Madaline rules, and the backpropagation technique) are described.
Journal Article
Learning state space trajectories in recurrent neural networks
TL;DR: A procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network, which seems particularly suited for temporally continuous domains.
Related Papers
Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks
Marwan A. Jabri, Barry Flower, et al.