A learning rule of neural networks via simultaneous perturbation and its hardware implementation
TLDR
A learning rule for neural networks based on simultaneous perturbation, together with an analog feedforward neural-network circuit that uses it; the rule requires only forward operations of the network and is therefore well suited to hardware implementation.
About: This article was published in Neural Networks on 1995-02-01 and is open access. It has received 106 citations to date. Topics: Learning rule, Competitive learning.
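The learning rule summarized above estimates the gradient of the error from two forward passes with all weights perturbed simultaneously, so no back-propagation circuitry is needed. A minimal sketch in Python (the quadratic loss, step size `a`, and perturbation size `c` are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Stand-in for the network's forward-pass error; any black-box
    # function of the weights works, since only evaluations are used.
    return float(np.sum((w - 1.0) ** 2))

def sp_step(w, a=0.1, c=0.01):
    # Perturb every weight simultaneously with a random +/-1 sign.
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    # Two forward passes: weights perturbed up, then down.
    l_plus = loss(w + c * delta)
    l_minus = loss(w - c * delta)
    # Simultaneous-perturbation estimate of the whole gradient at once.
    g_hat = (l_plus - l_minus) / (2.0 * c * delta)
    return w - a * g_hat

w = rng.normal(size=5)
for _ in range(200):
    w = sp_step(w)
# w is now close to the loss minimizer at 1.0
```

Because only two loss evaluations are needed per update regardless of the number of weights, the rule maps naturally onto analog hardware that can run forward passes but not backward ones.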
Citations
Journal Article
Correction to "Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation"
Payman Sadegh, James C. Spall, et al.
TL;DR: The numbering of references in the article noted by the title (ibid., vol. 43, pp. 1480-1484, Oct. 1998) is incorrect.
Journal Article
An Analog Neural Network System with Learning Capability Using Simultaneous Perturbation
Yutaka Maeda, Toshiyuki Kusuhashi, et al.
TL;DR: An implementation of an analog neural network system with on-line learning capability using simultaneous perturbation is described; a learning rule based on simultaneous perturbation is adopted.
Dissertation
Algorithm/Architecture Co-Design for Low-Power Neuromorphic Computing
Proceedings Article
SPSA for noisy non-stationary blind source separation
Tariq S. Durrani, G. Morison, et al.
TL;DR: A novel application of the simultaneous perturbation stochastic approximation (SPSA) algorithm to the noisy non-stationary blind source separation problem is presented, demonstrating the algorithm with a second-order cost function suited to non-stationary data.
Proceedings Article
Optical multi-cast multi-pole multi-throw switch using holography and its application to 2-degree ROADM Node
Keita Yamaguchi, Mitsumasa Nakajima, Kenya Suzuki, Mikitaka Itoh, Toshikazu Hashimoto, Yuichiro Ikuma, Joji Yamaguchi, et al.
TL;DR: An optical multi-cast multi-pole multi-throw switch that routes multiple optical signals via single-knob control, using a simple optical configuration, is proposed.
References
Journal Article
Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
TL;DR: The paper presents a stochastic approximation (SA) algorithm based on a simultaneous perturbation gradient approximation rather than the standard finite-difference approximation of Kiefer-Wolfowitz-type procedures; it can be significantly more efficient than the standard algorithms in high-dimensional problems.
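The efficiency claim can be made concrete: a two-sided Kiefer-Wolfowitz finite-difference estimate costs 2p loss evaluations for p parameters, while the simultaneous-perturbation estimate always costs two. A counting sketch (the evaluation counter and quadratic loss are illustrative devices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 50  # number of parameters

evals = 0
def loss(w):
    # Count every evaluation to compare the two estimators' cost.
    global evals
    evals += 1
    return float(np.sum(w ** 2))

w = rng.normal(size=p)
c = 1e-3

# Kiefer-Wolfowitz: perturb one coordinate at a time -> 2p evaluations.
evals = 0
g_fd = np.empty(p)
for i in range(p):
    e = np.zeros(p)
    e[i] = c
    g_fd[i] = (loss(w + e) - loss(w - e)) / (2 * c)
fd_cost = evals

# Simultaneous perturbation: one random direction -> 2 evaluations.
evals = 0
delta = rng.choice([-1.0, 1.0], size=p)
g_sp = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c * delta)
sp_cost = evals
```

The simultaneous-perturbation estimate is noisier per step, but its per-iteration cost is independent of dimension, which is the source of the efficiency gain in large problems.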
Journal Article
Accelerating the convergence of the back-propagation method
TL;DR: The back-propagation algorithm described by Rumelhart et al. (1986) often requires a large number of iterations before convergence in many applications; the techniques discussed by the authors can greatly accelerate its convergence.
Book
Analog VLSI implementation of neural systems
Carver A. Mead, Mohammed Ismail, et al.
TL;DR: A neural processor for maze solving, issues in analog VLSI, and MOS techniques for neural computing are discussed.
Journal Article
Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks
Marwan A. Jabri, Barry Flower, et al.
TL;DR: It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation is more economical for parallel analog implementations and is suitable for multilayer recurrent networks as well.
Proceedings Article
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
TL;DR: A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology based on the model-free distributed learning mechanism of Dembo and Kailath and supported by a modified parameter update rule.