Journal Article (DOI)
Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks
Abstract
Spiking neural networks (SNNs) are nature's versatile solution to fault-tolerant, energy-efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking NN processors have attempted to emulate biological NNs. These developments have created an imminent need for methods and tools that enable such systems to solve real-world signal processing problems. Like conventional NNs, SNNs can be trained on real, domain-specific data; however, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates step by step the problems typically encountered when training SNNs and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. It then gives an overview of existing approaches and introduces surrogate gradient (SG) methods as a particularly flexible and efficient way to overcome these challenges.
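The core trick behind surrogate gradient methods can be sketched in a few lines: keep the hard spiking threshold on the forward pass, but substitute a smooth function for its derivative on the backward pass. The fast-sigmoid-shaped surrogate and the function names below are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def spike_fn(v, threshold=1.0):
    """Forward pass: the true spiking nonlinearity, a hard Heaviside step."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: a smooth stand-in for the step's derivative, which is
    zero almost everywhere and undefined at the threshold. A fast-sigmoid
    shape is used here purely as an illustrative choice."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

# Toy backward step through one spiking layer:
v = np.array([0.4, 0.9, 1.3])           # membrane potentials
spikes = spike_fn(v)                     # binary output: [0., 0., 1.]
upstream = np.ones_like(v)               # gradient arriving from the loss
grad_v = upstream * surrogate_grad(v)    # nonzero even where no spike fired
```

Because the surrogate derivative is nonzero near the threshold, error signals flow back through silent and spiking units alike, which is what makes end-to-end gradient-based training of the binary dynamics possible.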
Citations
Journal Article (DOI)
A solution to the learning dilemma for recurrent networks of spiking neurons
Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass, +6 more
TL;DR: This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning, and suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
Journal Article (DOI)
Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)
TL;DR: Deep Continuous Local Learning (DECOLLE) learns deep spatio-temporal representations from spikes, relying solely on local information and using synthetic gradients.
Journal Article (DOI)
Opportunities for neuromorphic computing algorithms and applications
Catherine D. Schuman, Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell, Prasanna Date, Bill Kay, +5 more
TL;DR: This article reviews recent results in neuromorphic computing algorithms and applications, highlighting characteristics of neuromorphic computing technologies that make them attractive for the future of computing, and discusses opportunities for the future development of algorithms and applications on these systems.
Journal Article (DOI)
Memristors: From In-Memory Computing, Deep Learning Acceleration, and Spiking Neural Networks to the Future of Neuromorphic and Bio-Inspired Computing
Adnan Mehonic, Abu Sebastian, Bipin Rajendran, Osvaldo Simeone, Eleni Vasilaki, Anthony J. Kenyon, +5 more
TL;DR: In this paper, the case for memristors as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks is discussed.
Journal Article (DOI)
Deep learning incorporating biologically inspired neural dynamics and in-memory computing
Stanisław Woźniak, Angeliki Pantazi, Thomas Bohnstingl, Evangelos Eleftheriou, +4 more
TL;DR: The biologically inspired dynamics of spiking neurons are incorporated into conventional recurrent neural network units and in-memory computing, and it is shown how this allows for accurate and energy-efficient deep learning.
References
Journal Article (DOI)
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser, +1 more
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal Article (DOI)
A learning algorithm for boltzmann machines
TL;DR: A general parallel search method is described, based on statistical mechanics, and it is shown how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way.
Posted Content
Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
TL;DR: A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
Posted Content
Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
TL;DR: This work considers a small-scale version of conditional computation, where sparse stochastic units form a distributed representation of gaters that can turn off, in combinatorially many ways, large chunks of the computation performed in the rest of the neural network.
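The best-known idea from that work, the straight-through estimator, can be sketched as follows: sample a hard binary gate on the forward pass, and on the backward pass treat the sampling step as if it were the identity. The variable and function names below are hypothetical and this is a minimal numpy sketch, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(42)

def gate_forward(p):
    """Forward: sample hard 0/1 gates with firing probability p."""
    return (rng.random(p.shape) < p).astype(float)

def gate_backward_ste(upstream_grad):
    """Backward (straight-through): treat the non-differentiable sampling
    step as the identity, so the gradient with respect to p passes
    through unchanged."""
    return upstream_grad

p = np.array([0.1, 0.5, 0.9])            # gate probabilities
h = gate_forward(p)                       # hard binary decisions
g = gate_backward_ste(np.ones_like(p))    # gradient reaching p
```

The same identity-on-the-backward-pass idea is a direct ancestor of the surrogate gradients discussed in the main article: both replace an ill-defined derivative of a binary operation with a usable stand-in.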
Book
Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition
TL;DR: This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience.
Related Papers (5)
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham N. Chinya, Cao Yongqiang, Sri Harsha Choday, Georgios D. Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit-Kwan Lin, Andrew Lines, Ruokun Liu, Deepak A. Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi-Hsin Weng, Andreas Wild, Yoon Seok Yang, Hong Wang, +22 more
Training Deep Spiking Neural Networks Using Backpropagation
A million spiking-neuron integrated circuit with a scalable communication network and interface
Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, Dharmendra S. Modha, +19 more