Journal ArticleDOI

An analog VLSI recurrent neural network learning a continuous-time trajectory

TLDR
This work presents an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics.
Abstract
Real-time algorithms for gradient-descent supervised learning in recurrent dynamical neural networks fail to support scalable VLSI implementation, due to their complexity, which grows sharply with the network dimension. We present an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics. The network contains six fully recurrent neurons with continuous-time dynamics, providing 42 free parameters comprising connection strengths and thresholds. The chip implementing the network includes local provisions supporting both the learning and storage of the parameters, integrated in a scalable architecture which can be readily expanded for applications of learning recurrent dynamical networks requiring larger dimensionality. We describe and characterize the functional elements comprising the implemented recurrent network and integrated learning system, and include experimental results obtained from training the network to represent a quadrature-phase oscillator.
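The abstract's key idea, observing the gradient of the error index directly through random perturbations of the parameters, can be illustrated in software. The sketch below, in Python, assumes a standard two-sided simultaneous-perturbation (SPSA-style) update; the tanh nonlinearity, Euler integration step, gain and perturbation constants, and the sine/cosine quadrature target are illustrative assumptions, not details of the chip.

import numpy as np

N = 6       # six fully recurrent neurons, as in the paper
T = 200     # time steps per trajectory (assumed)
dt = 0.05   # Euler step for the continuous-time dynamics (assumed)

def run_network(params):
    # Integrate dx/dt = -x + tanh(W x + b): 36 weights + 6 thresholds = 42 parameters.
    W, b = params[:N * N].reshape(N, N), params[N * N:]
    x, xs = np.zeros(N), np.empty((T, N))
    for t in range(T):
        x = x + dt * (-x + np.tanh(W @ x + b))
        xs[t] = x
    return xs

def error_index(params):
    # Mean squared distance of two outputs from a quadrature-phase (sine/cosine) target.
    t = dt * np.arange(T)
    target = 0.5 * np.column_stack((np.sin(t), np.cos(t)))
    return np.mean((run_network(params)[:, :2] - target) ** 2)

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(42)   # the 42 free parameters
a, c = 0.5, 0.1                         # learning gain and perturbation size (assumed)
for k in range(2000):
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # random direction in parameter space
    # Observe the error index at two perturbed points instead of deriving the gradient:
    g = (error_index(theta + c * delta) - error_index(theta - c * delta)) / (2 * c)
    theta -= a * g * delta              # descend along the perturbation direction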


Citations
Posted Content

A Survey of Neuromorphic Computing and Neural Networks in Hardware.

TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Posted Content

Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences

TL;DR: This work introduces the Phased LSTM model, which extends the LSTM unit by adding a new time gate, controlled by a parametrized oscillation with a frequency range that requires updates of the memory cell only during a small percentage of the cycle.
Proceedings ArticleDOI

Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences

TL;DR: The Phased LSTM model as mentioned in this paper extends the LSTM unit by adding a new time gate, which is controlled by a parametrized oscillation with a frequency range that requires updates of the memory cell only during a small percentage of the cycle.
Journal ArticleDOI

Adaptive optics based on analog parallel stochastic optimization: analysis and experimental demonstration

TL;DR: In this paper, a very large-scale integration (VLSI) system implementing a simultaneous perturbation stochastic approximation optimization algorithm was applied for real-time adaptive control of multielement wave-front correctors.
Proceedings ArticleDOI

Hardware spiking neurons design: Analog or digital?

TL;DR: This paper compares digital and analog implementations of the same leaky integrate-and-fire neuron model at the same technology node and the same level of performance, and shows that the analog implementation requires 5 times less area and consumes 20 times less energy than the digital design.
References
Journal ArticleDOI

A Stochastic Approximation Method

TL;DR: In this article, a method is presented for making successive experiments at levels $x_1, x_2, \ldots$ in such a way that $x_n$ will tend to $\theta$ in probability.
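As a concrete illustration of the Robbins-Monro scheme this reference describes, the following sketch drives the iterate toward the root of a regression function observed only through noisy measurements; the response function, its root at 2.0, and the 1/n gain sequence are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def noisy_response(x):
    # Noisy observation of M(x) = x - 2.0, whose root is theta = 2.0.
    return (x - 2.0) + 0.3 * rng.standard_normal()

x, alpha = 0.0, 0.0
for n in range(1, 5001):
    a_n = 1.0 / n   # gains with sum(a_n) = inf, sum(a_n^2) < inf
    x -= a_n * (noisy_response(x) - alpha)
# x now approximates theta = 2.0; the iterates tend to theta in probability.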
Journal ArticleDOI

Identification and control of dynamical systems using neural networks

TL;DR: It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems, and that the models introduced are practically feasible.
Journal ArticleDOI

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal ArticleDOI

Multivariate stochastic approximation using a simultaneous perturbation gradient approximation

TL;DR: The paper presents a stochastic approximation (SA) algorithm based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz-type procedures; it can be significantly more efficient than the standard algorithms in large-dimensional problems.
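The efficiency claim can be made concrete: a Kiefer-Wolfowitz finite-difference gradient estimate needs 2p loss measurements in p dimensions, while the simultaneous perturbation estimate needs only 2, whatever p is. A minimal sketch, with an illustrative quadratic loss and p = 42 chosen to echo the main paper:

import numpy as np

rng = np.random.default_rng(2)
p, c = 42, 1e-2
loss = lambda th: float(np.sum(th ** 2))
theta = rng.standard_normal(p)

# Finite differences: 2*p = 84 measurements of the loss.
g_fd = np.array([(loss(theta + c * e) - loss(theta - c * e)) / (2 * c)
                 for e in np.eye(p)])

# Simultaneous perturbation: 2 measurements, independent of p.
delta = rng.choice([-1.0, 1.0], size=p)
g_sp = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) / delta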
Journal ArticleDOI

CMOS analog integrated circuits based on weak inversion operations

TL;DR: In this paper, a simple model describing the DC behavior of MOS transistors operating in weak inversion is derived on the basis of previous publications and verified experimentally for both p- and n-channel test transistors of a Si-gate low-voltage CMOS technology.
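For reference, the weak-inversion drain-current expression commonly quoted from this line of work, with all voltages referred to the local substrate; the notation ($I_{D0}$ a characteristic current, $n$ the slope factor, $U_T$ the thermal voltage) follows common usage and is an assumption here, not a quotation of the paper:

\[
I_D = I_{D0}\, e^{V_G/(n U_T)} \left( e^{-V_S/U_T} - e^{-V_D/U_T} \right), \qquad U_T = \frac{kT}{q}
\]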