Journal Article

Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks

TL;DR
This article elucidates, step by step, the problems typically encountered when training SNNs, guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting, and introduces surrogate gradient methods as a particularly flexible and efficient way to overcome those challenges.
Abstract
Spiking neural networks (SNNs) are nature's versatile solution to fault-tolerant, energy-efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking NN processors have attempted to emulate biological NNs. These developments have created an imminent need for methods and tools that enable such systems to solve real-world signal processing problems. Like conventional NNs, SNNs can be trained on real, domain-specific data; however, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates step by step the problems typically encountered when training SNNs and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. Accordingly, it gives an overview of existing approaches and provides an introduction to surrogate gradient (SG) methods as a particularly flexible and efficient way to overcome the aforementioned challenges.
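The core obstacle the abstract refers to is that a spiking neuron's output is a step function of its membrane potential, so its derivative is zero almost everywhere and ordinary backpropagation yields no learning signal. Surrogate gradient methods keep the binary spike in the forward pass but substitute a smooth, nonzero derivative in the backward pass. A minimal sketch of that idea, assuming PyTorch and an illustrative fast-sigmoid-shaped surrogate with a hypothetical steepness parameter beta = 10 (neither choice is prescribed by the article):

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a surrogate derivative for backprop."""

    beta = 10.0  # hypothetical surrogate steepness; not a value from the article

    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem > 0.0).float()  # binary spike: non-differentiable step

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        # Replace the true derivative (zero almost everywhere) with a smooth
        # fast-sigmoid-shaped surrogate centered on the firing threshold.
        surrogate = 1.0 / (SurrGradSpike.beta * mem.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply  # drop-in nonlinearity for an SNN forward pass
```

Because only the backward pass is modified, the network still emits genuine binary spikes at run time while standard gradient-based optimizers can train it end to end.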


Citations
Journal Article

A solution to the learning dilemma for recurrent networks of spiking neurons

TL;DR: This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning, and suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
Journal Article

Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)

TL;DR: Deep Continuous Local Learning (DECOLLE) has recently been proposed to learn deep spatio-temporal representations from spikes, relying solely on local information via synthetic gradients.
Journal Article

Opportunities for neuromorphic computing algorithms and applications

TL;DR: This article reviews recent results in neuromorphic computing algorithms and applications, highlights the characteristics of neuromorphic computing technologies that make them attractive for the future of computing, and discusses opportunities for the future development of algorithms and applications on these systems.
Journal Article

Memristors - From In-Memory Computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-Inspired Computing.

TL;DR: This paper discusses the case for memristors as a potential solution for implementing power-efficient in-memory computing, deep learning accelerators, and spiking neural networks.
Journal Article

Deep learning incorporating biologically inspired neural dynamics and in-memory computing

TL;DR: The biologically inspired dynamics of spiking neurons are incorporated into conventional recurrent neural network units and combined with in-memory computing, and it is shown how this allows for accurate and energy-efficient deep learning.
References
Journal Article

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal Article

A learning algorithm for Boltzmann machines

TL;DR: A general parallel search method is described, based on statistical mechanics, and it is shown how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way.
Posted Content

Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1

TL;DR: A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
Posted Content

Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation

TL;DR: This work considers a small-scale version of conditional computation, where sparse stochastic units form a distributed representation of gaters that can turn off large chunks of the computation performed in the rest of the neural network in combinatorially many ways.
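This reference matters for surrogate gradients because it studies how to estimate gradients through binary (stochastic) units; the simplest estimator it discusses, the straight-through estimator, copies the incoming gradient past the hard thresholding step. A minimal sketch in the same spirit, assuming PyTorch and a clipped variant (the clipping window |x| <= 1 is an illustrative choice, not the paper's exact formulation):

```python
import torch

class StraightThroughSign(torch.autograd.Function):
    """Sign nonlinearity trained with a (clipped) straight-through estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Hard binarization to +1 / -1 in the forward pass.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pretend the forward pass was the identity, but only inside |x| <= 1.
        return grad_output * (x.abs() <= 1.0).float()

binarize = StraightThroughSign.apply  # e.g. applied to activations of a binarized network
```

Viewed this way, surrogate gradients generalize the straight-through idea: instead of the identity, the backward pass uses a smooth function shaped around the spiking threshold.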
Book

Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition

TL;DR: This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience.