Deep learning in spiking neural networks
Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, Anthony S. Maida
TLDR
The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.
About:
This article is published in Neural Networks. It was published on 2019-03-01, is currently open access, and has received 756 citations to date. The article focuses on the topics: Spiking neural network & Artificial neural network.
Citations
Proceedings ArticleDOI
Event-Based Angular Velocity Regression with Spiking Networks
TL;DR: This paper investigates angular-velocity regression for a rotating event camera with an SNN, predicting angular velocities continuously in time from irregular, asynchronous event-based input.
Journal ArticleDOI
Classifying Melanoma Skin Lesions Using Convolutional Spiking Neural Networks With Unsupervised STDP Learning Rule
TL;DR: This paper classifies malignant melanoma and benign melanocytic nevus skin lesions using convolutional SNNs with an unsupervised spike-timing-dependent plasticity (STDP) learning rule, and proposes feature selection to retain the most diagnostic features and improve the networks' classification performance.
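For reference, the unsupervised rule named in this TL;DR is pair-based STDP. The sketch below is a minimal illustration of that rule only; the stdp_update helper, time constants, and learning rates are my own illustrative assumptions, not the paper's values.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes the
    postsynaptic spike, depress otherwise. Times are in milliseconds."""
    dt = t_post - t_pre
    if dt >= 0:                       # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                             # post before pre -> depression
        dw = -a_minus * np.exp(dt / tau_minus)
    return np.clip(w + dw, w_min, w_max)

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the synapse.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```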
Proceedings ArticleDOI
Deep Spiking Neural Network with Spike Count based Learning Rule
TL;DR: This paper introduces a novel spike-based learning rule for rate-coded deep SNNs, in which the spike count of each neuron serves as a surrogate for gradient backpropagation; the approach allows direct deployment to neuromorphic hardware and supports efficient inference.
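A minimal NumPy sketch of the general idea of treating spike counts as a rate-like quantity for a gradient-style update. The layer sizes, encoding, and delta-rule step here are illustrative assumptions and not the paper's exact learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 100, 64, 10          # simulation steps and layer sizes (illustrative)
W = rng.normal(0.0, 0.1, (n_out, n_in))
v_th = 1.0                            # integrate-and-fire threshold

def run_layer(x_rates):
    """Run rate-coded IF neurons for T steps and return input/output spike counts."""
    in_counts, out_counts = np.zeros(n_in), np.zeros(n_out)
    v = np.zeros(n_out)
    for _ in range(T):
        in_spikes = (rng.random(n_in) < x_rates).astype(float)   # Poisson-like encoding
        in_counts += in_spikes
        v += W @ in_spikes
        out_spikes = (v >= v_th).astype(float)
        v[out_spikes == 1] = 0.0                                  # reset after a spike
        out_counts += out_spikes
    return in_counts, out_counts

# Treat the normalized spike count like a differentiable rate and apply a delta rule.
x_rates = rng.random(n_in) * 0.2
target = np.zeros(n_out); target[3] = 1.0
in_c, out_c = run_layer(x_rates)
err = target - out_c / T                        # error defined on output spike counts
W += 0.01 * np.outer(err, in_c / T)             # count-based surrogate gradient step
```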
Journal ArticleDOI
On the accuracy and computational cost of spiking neuron implementation.
TL;DR: This paper introduces a refined approach, based on multiobjective optimization theory, that characterizes spiking neuron (SN) simulation capacities and ultimately chooses optimal simulation parameters; it also shows that the firing frequency used in previous works is a necessary but insufficient metric for evaluating simulation accuracy.
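The accuracy/cost trade-off this TL;DR refers to can be seen even in the simplest case: the integration step of a leaky integrate-and-fire neuron controls both the number of operations and how far spike times drift. The sketch below uses forward-Euler integration with illustrative parameter values; it is not the paper's refined multiobjective procedure.

```python
import numpy as np

def lif_spike_times(i_input=1.5, t_sim=200.0, dt=0.1,
                    tau_m=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Forward-Euler LIF simulation; smaller dt costs more steps but shifts spike times less."""
    steps = int(t_sim / dt)
    v, spikes = v_rest, []
    for k in range(steps):
        v += dt / tau_m * (-(v - v_rest) + i_input)
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

# Coarser steps are cheaper but distort the spike train, i.e. any accuracy metric
# finer than the mean firing frequency.
for dt in (1.0, 0.1, 0.01):
    print(dt, lif_spike_times(dt=dt)[:3])
```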
Posted Content
Training spiking multi-layer networks with surrogate gradients on an analog neuromorphic substrate
Benjamin Cramer, Sebastian Billaudelle, Simeon Kanya, Aron Leibfried, Andreas Grübl, Vitali Karasenko, Christian Pehle, Korbinian Schreiber, Yannik Stradmann, Johannes Weis, Johannes Schemmel, Friedemann Zenke
TL;DR: This work develops a hardware-in-the-loop strategy to train multi-layer spiking networks using surrogate gradients on the analog BrainScaleS-2 chip, demonstrating low-energy spiking network processing on an analog neuromorphic substrate and setting several new benchmarks for hardware systems in terms of classification accuracy, processing speed, and efficiency.
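A minimal PyTorch sketch of the surrogate-gradient idea behind this line of work (the hardware-in-the-loop and BrainScaleS-2 specifics are omitted): the forward pass keeps the hard spiking threshold, while the backward pass substitutes a smooth derivative. The "fast sigmoid" surrogate and its scale are common choices, assumed here for illustration rather than taken from the paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate derivative in the backward pass."""
    scale = 10.0  # steepness of the surrogate (illustrative value)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.scale * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

# The non-differentiable threshold now passes a usable gradient to upstream weights.
v = torch.randn(5, requires_grad=True)
spike_fn(v).sum().backward()
print(v.grad)
```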
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
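The core idea of residual learning is that stacked layers learn a residual function F(x) that is added back to the identity shortcut, y = F(x) + x. Below is a PyTorch sketch of a basic residual block; channel counts and batch-norm placement follow the common "basic block" pattern and are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: the convolutions learn F(x); the shortcut adds x back."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)        # y = F(x) + x

print(ResidualBlock(64)(torch.randn(1, 64, 32, 32)).shape)
```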
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors achieve state-of-the-art performance with a deep convolutional neural network (DCNN) consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
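A PyTorch sketch of the layer stack described in this TL;DR (five convolutional layers, interleaved max pooling, three fully connected layers, 1000-way classifier). Details such as local response normalization, dropout, and the original two-GPU split are omitted, so this is an approximation rather than the authors' exact model.

```python
import torch.nn as nn

# Expects 3 x 227 x 227 inputs; the softmax is applied by the loss at training time.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
```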
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
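A minimal PyTorch usage sketch: the gated LSTM cell lets error flow through long sequences, so a 1000-step sequence like the one mentioned in the TL;DR remains trainable where a plain RNN's gradients would vanish. The sizes below are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
x = torch.randn(4, 1000, 8)                 # batch of 4 sequences, 1000 time steps each
output, (h_n, c_n) = lstm(x)
print(output.shape, h_n.shape)              # (4, 1000, 32), (1, 4, 32)
```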
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
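A PyTorch sketch of the configuration pattern this TL;DR describes: very small (3x3) convolution filters stacked into blocks separated by 2x2 max pooling, pushed to 16-19 weight layers. The vgg_block helper and the specific channel progression are illustrative, not the authors' code.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """A VGG-style block: only 3x3 convolutions, followed by 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1), nn.ReLU()]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

# 13 convolutional layers here plus 3 fully connected layers gives the 16-weight-layer (VGG-16) configuration.
features = nn.Sequential(
    vgg_block(3, 64, 2), vgg_block(64, 128, 2),
    vgg_block(128, 256, 3), vgg_block(256, 512, 3), vgg_block(512, 512, 3),
)
```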
Related Papers (5)
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham N. Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios D. Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit-Kwan Lin, Andrew Lines, Ruokun Liu, Deepak A. Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi-Hsin Weng, Andreas Wild, Yoon Seok Yang, Hong Wang
Training Deep Spiking Neural Networks Using Backpropagation.
A million spiking-neuron integrated circuit with a scalable communication network and interface
Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, Dharmendra S. Modha