Deep learning in spiking neural networks
Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, Anthony S. Maida
TLDR
The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates for processing spatio-temporal data.
About
This article is published in Neural Networks. It was published on 2019-03-01 and is currently open access. It has received 756 citations to date. The article focuses on the topics: Spiking neural network & Artificial neural network.
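The TLDR's claim about operation counts rests on event-driven, spike-based units. Below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit most surveyed SNNs build on; the parameters (tau, v_th, dt) and the constant-input example are illustrative choices, not values from the article.

```python
import numpy as np

def lif_run(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron over a sequence of input currents.
    Illustrative parameters; returns a binary spike train."""
    v = v_reset
    spikes = []
    for i in input_current:
        v += dt * (-(v - v_reset) / tau + i)   # leaky integration of the membrane potential
        if v >= v_th:                          # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset                        # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant input drives sparse, event-like output spikes.
out = lif_run(np.full(100, 0.08))
print(out.sum(), "spikes in 100 steps")
```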
Citations
Journal Article
SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks
TL;DR: In this paper, the authors proposed spike-based implicit differentiation on the equilibrium state (SPIDE) for supervised learning with purely spike-based computation, which demonstrates the potential for energy-efficient training of SNNs.
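For context, implicit differentiation on an equilibrium state means the network is run to a fixed point and gradients are obtained from the implicit function theorem rather than by unrolling time. The sketch below shows that general idea on a toy rate-based fixed point z = tanh(Wz + Ux); it is an assumption-laden illustration and does not reproduce SPIDE's spike-based mechanics.

```python
import numpy as np

# Toy equilibrium model: z* satisfies z = tanh(W z + U x). Sizes are arbitrary.
rng = np.random.default_rng(0)
n, m = 8, 4
W = 0.1 * rng.standard_normal((n, n))
U = rng.standard_normal((n, m))
x = rng.standard_normal(m)

def f(z):
    return np.tanh(W @ z + U @ x)

# Forward pass: fixed-point iteration to the equilibrium state z*.
z = np.zeros(n)
for _ in range(200):
    z = f(z)

# Loss L = 0.5 * ||z* - target||^2 and its gradient with respect to z*.
target = np.ones(n)
dL_dz = z - target

# Implicit backward pass: solve (I - J)^T a = dL/dz*, with J = df/dz at z*.
pre = W @ z + U @ x
J = np.diag(1.0 - np.tanh(pre) ** 2) @ W
a = np.linalg.solve((np.eye(n) - J).T, dL_dz)

# Gradient with respect to W follows from the chain rule through f at the equilibrium.
dL_dW = np.outer((1.0 - np.tanh(pre) ** 2) * a, z)
```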
Posted Content
DenseHMM: Learning Hidden Markov Models by Learning Dense Representations.
TL;DR: It is shown that the non-linearity of the kernelization is crucial for the expressiveness of the representations, and two optimization schemes that make use of this are proposed: a modification of the Baum-Welch algorithm and a direct co-occurrence optimization.
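The core idea, parameterizing HMM transition and emission probabilities through low-dimensional dense embeddings pushed through a non-linearity, can be sketched roughly as follows; the embedding dimensions, the softmax kernel, and the variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Dense-representation sketch: probability tables are derived from embeddings
# rather than stored as free parameters. Dimensions are illustrative.
rng = np.random.default_rng(1)
n_states, n_obs, d = 3, 5, 2

u = rng.standard_normal((n_states, d))   # "from"-state embeddings
v = rng.standard_normal((n_states, d))   # "to"-state embeddings
w = rng.standard_normal((n_obs, d))      # observation embeddings

def softmax(scores, axis=-1):
    scores = scores - scores.max(axis=axis, keepdims=True)
    e = np.exp(scores)
    return e / e.sum(axis=axis, keepdims=True)

A = softmax(u @ v.T, axis=1)   # n_states x n_states transition matrix, rows sum to 1
B = softmax(u @ w.T, axis=1)   # n_states x n_obs emission matrix, rows sum to 1
```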
Journal Article
Voltage Gated Domain Wall Magnetic Tunnel Junction-based Spiking Convolutional Neural Network
TL;DR: In this article, a spin-orbit torque (SOT)-driven and voltage-gated domain wall motion (DWM)-based MTJ device and its application in neuromorphic computing were proposed.
Journal Article
AI-Managed Cognitive Radio Digitizers
TL;DR: An overview of circuit and system techniques for AI-managed analog/digital interfaces in SDR/CR mobile telecom systems is given, and design trends and challenges are discussed, ranging from new communication and computing paradigms for AIoT devices and networks to digital-based, scaling-friendly analog circuit techniques for efficient digitization.
Journal Article
A Co-Designed Neuromorphic Chip With Compact (17.9K F²) and Weak Neuron Number-Dependent Neuron/Synapse Modules
TL;DR: This work proposes a co-designed neuromorphic core (SRCcore) based on quantized spiking neural network (SNN) technology and a compact chip design methodology, and shows that the quantized SNNs achieve 0.05%∼2.19% higher accuracy than previous works, thus supporting the design and application of SRCcore.
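Quantizing SNN weights to a few bits is what makes such compact neuron/synapse modules possible. A minimal sketch of symmetric uniform weight quantization is shown below; the bit width and scaling rule are generic assumptions, not the scheme SRCcore actually implements.

```python
import numpy as np

def quantize_uniform(w, bits=4):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.
    Illustrative stand-in for quantized SNN weights; not SRCcore's scheme."""
    levels = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / levels if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale, scale

# Example usage on a random weight matrix.
w = np.random.default_rng(2).standard_normal((4, 4))
w_q, s = quantize_uniform(w, bits=4)
```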
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won 1st place on the ILSVRC 2015 classification task.
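The residual idea is that each block learns a correction F(x) that is added to an identity shortcut, so very deep stacks remain trainable. A bare-bones sketch of one such block follows; the dense (non-convolutional) layers, the omission of batch normalization, and the matrix names are simplifying assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Minimal residual block: the layers learn F(x) and the identity shortcut
    is added back, so the block only has to fit a residual. W1 and W2 are
    square matrices matching the size of x (projections omitted)."""
    out = relu(W1 @ x)       # first transformation
    out = W2 @ out           # second transformation
    return relu(out + x)     # identity shortcut + residual

# Example usage with an arbitrary 16-dimensional input.
rng = np.random.default_rng(3)
x = rng.standard_normal(16)
y = residual_block(x, rng.standard_normal((16, 16)), rng.standard_normal((16, 16)))
```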
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors trained a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
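The layer sequence described in the summary can be written out as a simple configuration; the filter counts and kernel sizes below follow the commonly cited AlexNet configuration and should be treated as illustrative.

```python
# Illustrative layer list: five convolutional layers, some followed by
# max-pooling, then three fully-connected layers ending in a 1000-way softmax.
alexnet_layers = [
    ("conv", {"filters": 96, "kernel": 11, "stride": 4}),
    ("maxpool", {"size": 3, "stride": 2}),
    ("conv", {"filters": 256, "kernel": 5}),
    ("maxpool", {"size": 3, "stride": 2}),
    ("conv", {"filters": 384, "kernel": 3}),
    ("conv", {"filters": 384, "kernel": 3}),
    ("conv", {"filters": 256, "kernel": 3}),
    ("maxpool", {"size": 3, "stride": 2}),
    ("fc", {"units": 4096}),
    ("fc", {"units": 4096}),
    ("fc", {"units": 1000, "activation": "softmax"}),
]
```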
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
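The "constant error carousel" is the additively updated cell state in each memory unit, which lets error flow back over long lags. Below is a minimal single-step sketch of an LSTM cell in its now-common form (the forget gate was added in later work); the weight shapes and the omission of biases and peephole connections are simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, Wi, Wf, Wo, Wg):
    """One LSTM step. x: input vector, h/c: hidden and cell states of size n_h,
    each W*: matrix of shape (n_h, len(x) + n_h). Biases omitted."""
    z = np.concatenate([x, h])
    i = sigmoid(Wi @ z)          # input gate
    f = sigmoid(Wf @ z)          # forget gate
    o = sigmoid(Wo @ z)          # output gate
    g = np.tanh(Wg @ z)          # candidate cell update
    c = f * c + i * g            # additive cell-state update: the error carousel
    h = o * np.tanh(c)           # new hidden state
    return h, c
```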
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
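The reason very small filters allow such depth is that stacking 3×3 convolutions covers the receptive field of larger filters with fewer parameters and extra non-linearities in between. A quick back-of-the-envelope comparison, with the channel count chosen arbitrarily for illustration:

```python
# Two stacked 3x3 convolutions cover a 5x5 receptive field but need fewer
# weights per input/output channel pair: 2 * 3 * 3 = 18 versus 5 * 5 = 25.
c = 256                                # illustrative channel count
stacked_3x3 = 2 * (3 * 3) * c * c      # 1,179,648 weights
single_5x5 = (5 * 5) * c * c           # 1,638,400 weights
print(stacked_3x3, "<", single_5x5)
```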
Related Papers (5)
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham N. Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios D. Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit-Kwan Lin, Andrew Lines, Ruokun Liu, Deepak A. Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi-Hsin Weng, Andreas Wild, Yoon Seok Yang, Hong Wang
Training Deep Spiking Neural Networks Using Backpropagation.
A million spiking-neuron integrated circuit with a scalable communication network and interface
Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, Dharmendra S. Modha