Deep learning in spiking neural networks
Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, Anthony S. Maida
TLDR
The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates for processing spatio-temporal data.
About
This article was published in Neural Networks on 2019-03-01 and is open access. It has received 756 citations to date. The article focuses on the topics: Spiking neural network & Artificial neural network.
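To make the contrast with conventional ANN units concrete, here is a minimal sketch of the leaky integrate-and-fire (LIF) dynamics that most SNNs surveyed in the article build on. The function name, time constants and input values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: 1-D array of injected current per time step (arbitrary units).
    Returns the membrane-potential trace and the indices of emitted spikes.
    """
    v = v_reset
    trace, spikes = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: the potential decays toward rest while integrating input.
        v += dt / tau * (-(v - v_reset) + i_t)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after spiking
        trace.append(v)
    return np.array(trace), spikes

# Example: a constant supra-threshold current produces a regular spike train,
# i.e. information is carried by spike timing rather than by a real-valued activation.
trace, spikes = lif_neuron(np.full(100, 1.5))
print(f"{len(spikes)} spikes at steps {spikes}")
```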
Citations
Posted Content
Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for Network Intrusion Detection.
Shibo Zhou, Xiaohua Li
TL;DR: It is shown that SNNs built with non-leaky neurons can have a less complex, less nonlinear input-output response and can achieve superior performance, demonstrated by experiments with such SNNs on two popular network intrusion detection datasets.
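A toy sketch of the non-leaky, single-spike temporal coding idea this TL;DR refers to: the neuron integrates its weighted input without leak and fires at most once, and the time of that first spike (earlier = stronger drive) is the output. The function name, threshold and inputs below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def first_spike_time(weights, inputs, threshold=1.0, t_max=100):
    """Non-leaky integrate-and-fire neuron with single-spike temporal coding.

    The neuron integrates a constant weighted drive with no leak and fires once,
    at the first step its potential crosses the threshold; the spike *time* is
    the neuron's output value. Returns t_max if the threshold is never reached
    (interpreted as "no spike").
    """
    drive = float(np.dot(weights, inputs))   # constant input current
    if drive <= 0:
        return t_max
    v = 0.0
    for t in range(1, t_max + 1):
        v += drive                           # pure integration, no leak term
        if v >= threshold:
            return t                         # single spike; the neuron then stays silent
    return t_max

# Stronger weighted input crosses the threshold sooner and therefore spikes earlier.
print(first_spike_time(weights=[0.5, 0.5], inputs=[0.2, 0.1]))  # weaker drive -> later spike
print(first_spike_time(weights=[0.5, 0.5], inputs=[0.9, 0.8]))  # stronger drive -> earlier spike
```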
Journal Article
Evolving spiking neural network model for PM2.5 hourly concentration prediction based on seasonal differences: A case study on data from Beijing and Shanghai
TL;DR: Wang et al. developed a staging evolving spiking neural network (eSNN) model, named Staging eSNN, that first employs a time-series clustering algorithm to distinguish seasonal from diurnal variation in PM2.5 concentration, and then predicts concentrations in Beijing and Shanghai 1, 3, 6, 12 and 24 hours in advance.
Journal Article
A new recursive least squares-based learning algorithm for spiking neurons
TL;DR: Wang et al. proposed a recursive least squares-based learning rule (RLSBLR) for SNNs that learns to generate a desired spatio-temporal spike train.
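For orientation, the following is the textbook recursive least squares (RLS) weight update that rules of this kind build on; it is a generic sketch, not the paper's exact RLSBLR formulation, and the class name and hyperparameters are illustrative assumptions.

```python
import numpy as np

class RLSLearner:
    """Generic recursive least squares (RLS) weight update.

    Shown only to illustrate the kind of error-driven, second-order update an
    RLS-based spiking-neuron rule is built on.
    """

    def __init__(self, n_inputs, forgetting=0.99, delta=1.0):
        self.w = np.zeros(n_inputs)
        self.P = np.eye(n_inputs) / delta   # running estimate of the inverse input correlation
        self.lam = forgetting

    def update(self, x, target):
        x = np.asarray(x, dtype=float)
        error = target - self.w @ x          # error between desired and actual output
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)      # RLS gain vector
        self.w += gain * error               # weight correction proportional to the error
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return error

# Example: fit weights so the neuron's summed drive matches a desired value per input pattern.
learner = RLSLearner(n_inputs=3)
for _ in range(50):
    learner.update([1.0, 0.0, 1.0], target=1.0)
    learner.update([0.0, 1.0, 0.0], target=0.0)
print(np.round(learner.w, 3))
```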
Proceedings Article
Minimizing Inference Time: Optimization Methods for Converted Deep Spiking Neural Networks
TL;DR: In this article, the authors evaluate two inference optimization algorithms and propose an additional method for error minimization to improve the simulation time of spiking neural networks, which can speed up the inference process by a factor of ten.
Book Chapter
Biologically Plausible Learning of Text Representation with Spiking Neural Networks
TL;DR: In this work, a biologically plausible mechanism for generating a low-dimensional, spike-based text representation is proposed; the representation can be used for text/document classification and achieves an accuracy of 80.19% on the bydate version of the 20 Newsgroups data set.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won first place in the ILSVRC 2015 classification task.
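The core idea is the identity shortcut: a block learns only the residual F(x) = H(x) - x instead of the full mapping H(x). A toy dense-layer sketch of that idea follows; the actual ResNet blocks use convolutions and batch normalisation, and all names and shapes here are illustrative.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Minimal residual block: output = ReLU(F(x) + x) with an identity shortcut.

    F is two small fully connected layers with a ReLU in between; the block only
    has to learn the residual F(x), which is easy to drive toward zero.
    """
    relu = lambda z: np.maximum(z, 0.0)
    f = relu(x @ w1) @ w2        # the learned residual F(x)
    return relu(f + x)           # identity shortcut added before the final nonlinearity

# With zero-initialised weights the block starts out as (ReLU of) the identity,
# which is what makes very deep stacks of such blocks easy to optimise.
x = np.array([0.5, -0.2, 1.0])
w1 = np.zeros((3, 3))
w2 = np.zeros((3, 3))
print(residual_block(x, w1, w2))   # -> ReLU(x): the shortcut path alone
```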
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: State-of-the-art performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
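A rough single-branch Keras sketch of that five-conv + three-FC layout, assuming TensorFlow/Keras is available. It omits the original's two-GPU split, local response normalisation and dropout, so treat it as an illustration of the layer structure rather than a faithful reimplementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def alexnet_like(num_classes=1000):
    return tf.keras.Sequential([
        layers.Input((227, 227, 3)),
        layers.Conv2D(96, 11, strides=4, activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dense(4096, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # final 1000-way softmax
    ])

model = alexnet_like()
model.summary()
```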
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
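A numpy sketch of one LSTM step, to show where the constant error carousel lives: the cell state is updated only by gated additive interactions, so gradients can flow across many time steps without vanishing. The gate layout follows the common modern formulation (with a forget gate, which the 1997 paper did not have), and all parameter names below are illustrative.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, params):
    """One step of a standard LSTM cell (modern formulation with a forget gate)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W, U, b = params                      # input, recurrent and bias parameters
    z = W @ x + U @ h_prev + b            # all four gates computed in one affine map
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g                # additive cell-state update: the "carousel"
    h = o * np.tanh(c)                    # gated output
    return h, c

# Example with random parameters for a 4-unit cell driven by a 3-dimensional input.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
params = (rng.normal(size=(4 * n_hid, n_in)),
          rng.normal(size=(4 * n_hid, n_hid)),
          np.zeros(4 * n_hid))
h = c = np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c, params)
print(np.round(h, 3))
```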
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
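A Keras sketch of the VGG idea, assuming TensorFlow/Keras is available: depth comes from stacking many 3×3 convolutions between 2×2 max-pooling stages. The filter counts follow the familiar VGG-16 pattern, but this is an illustrative skeleton rather than a line-for-line reproduction of the paper's configuration D.

```python
import tensorflow as tf
from tensorflow.keras import layers

def vgg_like(num_classes=1000):
    model = tf.keras.Sequential([layers.Input((224, 224, 3))])
    for filters, n_convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
        for _ in range(n_convs):                      # several small 3x3 filters in a row
            model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2, strides=2))  # halve the spatial resolution
    model.add(layers.Flatten())
    model.add(layers.Dense(4096, activation="relu"))
    model.add(layers.Dense(4096, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

print(f"{len(vgg_like().layers)} layers in the sketch")
```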
Related Papers (5)
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham N. Chinya, Cao Yongqiang, Sri Harsha Choday, Georgios D. Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit-Kwan Lin, Andrew Lines, Ruokun Liu, Deepak A. Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi-Hsin Weng, Andreas Wild, Yoon Seok Yang, Hong Wang
Training Deep Spiking Neural Networks Using Backpropagation.
A million spiking-neuron integrated circuit with a scalable communication network and interface
Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, Dharmendra S. Modha