Open Access Journal Article

A learning rule of neural networks via simultaneous perturbation and its hardware implementation

Yutaka Maeda, +2 more
- 01 Feb 1995
- Neural Networks, Vol. 8, Iss. 2, pp. 251-259
TLDR
A learning rule for neural networks via simultaneous perturbation and an analog feedforward neural network circuit using that rule are described; the rule requires only forward operations of the neural network and is suitable for hardware implementation.
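To make the idea concrete, below is a minimal sketch of a simultaneous-perturbation weight update in Python. It is not code from the paper: the name `sp_update`, the step sizes, and the toy quadratic loss are all illustrative assumptions. The point it demonstrates is that the gradient estimate needs only two forward evaluations of the loss and no back-propagation, which is what makes the rule attractive for analog hardware.

```python
import numpy as np

def sp_update(w, loss, alpha=0.01, c=0.01, rng=None):
    """One simultaneous-perturbation update of the weight vector w.

    Illustrative sketch: all weights are perturbed at once by a random
    +/-c sign vector, so the gradient estimate requires only two
    forward evaluations of the loss, with no back-propagation.
    """
    rng = rng or np.random.default_rng()
    s = rng.choice([-1.0, 1.0], size=w.shape)       # random sign vector
    g = (loss(w + c * s) - loss(w)) / (c * s)       # per-weight gradient estimate
    return w - alpha * g

# Toy usage: drive a weight vector toward a fixed target.
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: np.sum((w - target) ** 2)
w = np.zeros(3)
for _ in range(3000):
    w = sp_update(w, loss)
print(w)   # close to [1.0, -2.0, 0.5]
```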
About
This article was published in Neural Networks on 1995-02-01 and is currently open access. It has received 106 citations to date. The article focuses on the topics: Learning rule & Competitive learning.


Citations

FPGA implementation of bidirectional associative memory using simultaneous perturbation

TL;DR: A recursive learning scheme for BAM is proposed and its hardware implementation is described; the learning scheme is applicable to analogue BAM as well.
Journal Article

Holographic phase modulation enhances performance of optical switches

TL;DR: In this paper, the authors propose a 5×5 WSS built with the same optics as a 1×N WSS, using holographic phase modulation; it can route multiple input signals by integrating a broadcast-and-select (B&S) function that can connect any input port with any output port.
Proceedings Article

Neurocontroller for unknown systems using simultaneous perturbation

TL;DR: The control scheme described here does not require information about the plant Jacobian, because the simultaneous perturbation method estimates the gradient using only values of the error defined by the output of the plant and its desired value.
References
Journal Article

Multivariate stochastic approximation using a simultaneous perturbation gradient approximation

TL;DR: The paper presents a stochastic approximation (SA) algorithm based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures; it can be significantly more efficient than the standard algorithms in large-dimensional problems.
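A hedged sketch of the two-sided SPSA recursion this reference describes, in Python. The function name, iteration count, and gain constants are assumptions for illustration, though the gain-sequence exponents 0.602 and 0.101 follow commonly cited SPSA defaults.

```python
import numpy as np

def spsa_minimize(f, theta, iters=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimal two-sided SPSA sketch.

    Each iteration uses exactly two evaluations of f regardless of the
    problem dimension p, versus the 2p evaluations a Kiefer-Wolfowitz
    finite-difference scheme would need. Gain sequences follow the
    commonly used forms a_k = a/(k+1)^alpha and c_k = c/(k+1)^gamma.
    """
    rng = np.random.default_rng(seed)
    for k in range(iters):
        ak = a / (k + 1) ** alpha
        ck = c / (k + 1) ** gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1
        ghat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * ghat
    return theta

# Toy usage: a 10-dimensional quadratic bowl.
print(spsa_minimize(lambda x: np.sum(x ** 2), np.ones(10)))
```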
Journal Article

Accelerating the convergence of the back-propagation method

TL;DR: The back-propagation algorithm described by Rumelhart et al. (1986) often requires a large number of iterations before convergence in many applications; the modifications discussed by the authors can greatly accelerate convergence.
Book

Analog VLSI implementation of neural systems

TL;DR: Topics discussed include a neural processor for maze solving, issues in analog VLSI, and MOS techniques for neural computing.
Journal Article

Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks

TL;DR: It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation is more economical for parallel analog implementations and is suitable for multilayer recurrent networks as well.
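For contrast with simultaneous perturbation, here is a minimal single-weight-perturbation sketch in the spirit of this reference. The name `weight_perturbation_step` and the constants are hypothetical, not the paper's code; it shows why per-weight perturbation costs one extra forward pass per weight, whereas the simultaneous-perturbation rule above needs only two in total.

```python
import numpy as np

def weight_perturbation_step(w, loss, eta=0.05, pert=1e-3):
    """One pass of single-weight perturbation (illustrative sketch).

    Each weight is perturbed in turn, and the resulting change in loss
    gives a finite-difference estimate of that partial derivative, so
    only forward passes are required -- one extra evaluation per
    weight, versus two evaluations total for simultaneous perturbation.
    """
    base = loss(w)
    grad = np.empty_like(w)
    for i in range(w.size):
        w[i] += pert
        grad[i] = (loss(w) - base) / pert   # finite-difference partial
        w[i] -= pert                        # restore the weight
    return w - eta * grad

# Toy usage: one descent step on a quadratic loss.
w = weight_perturbation_step(np.zeros(3), lambda w: np.sum((w - 1.0) ** 2))
print(w)
```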
Proceedings Article

A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

TL;DR: A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology based on the model-free distributed learning mechanism of Dembo and Kailath and supported by a modified parameter update rule.