Deep learning in spiking neural networks
Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, Anthony S. Maida
TLDR
The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.
About:
This article is published in Neural Networks. The article was published on 2019-03-01 and is currently open access. It has received 756 citations to date. The article focuses on the topics: Spiking neural network & Artificial neural network.
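To ground the comparison above, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic spiking unit used throughout the networks this survey covers. The sketch is not taken from the article; the time constant, threshold, and input statistics are illustrative assumptions.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Integrate an input current trace and return the emitted spike train."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leaky integration of the membrane potential toward rest, driven by input.
        v += (dt / tau) * (-(v - v_rest) + i_t)
        if v >= v_threshold:      # threshold crossing emits a spike
            spikes[t] = 1.0
            v = v_reset           # reset the membrane potential
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.0, size=200)   # illustrative random input current
print("spikes emitted:", int(simulate_lif(current).sum()))
```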
Citations
Posted Content
Massively Parallel FPGA Hardware for Spike-By-Spike Networks
David Rotermund, Klaus Pawelzik
TL;DR: This paper develops and investigates a framework and computational Spike-by-Spike (SbS) cores for a network-on-chip that realizes a compromise between machine learning approaches and biologically realistic models, and demonstrates the feasibility of the design on a Xilinx Virtex 6 FPGA.
Journal Article
Low-Power Vertical Tunnel Field-Effect Transistor Ternary Inverter
Hyun Woo Kim, Daewoong Kwon
TL;DR: In this article, a vertical tunnel FET-based ternary CMOS (T-CMOS) is introduced and its electrical characteristics are investigated using TCAD device and mixed-mode simulations with experimentally calibrated tunneling parameters.
Posted Content
FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
TL;DR: FSpiNN is an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference while maintaining accuracy and unsupervised learning capability; it reduces the computational requirements of neuronal and STDP operations, improves the accuracy of STDP-based learning, compresses the SNN through fixed-point quantization, and incorporates memory and energy requirements into the optimization process.
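As a rough illustration of the fixed-point quantization step mentioned in this TL;DR, the sketch below rounds floating-point synaptic weights onto a signed fixed-point grid. The bit-width, number of fractional bits, and rounding scheme are assumptions for illustration, not FSpiNN's actual implementation.

```python
import numpy as np

def quantize_fixed_point(weights, n_bits=8, frac_bits=6):
    """Round weights to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    q_min = -(2 ** (n_bits - 1))          # most negative representable integer
    q_max = 2 ** (n_bits - 1) - 1         # most positive representable integer
    q = np.clip(np.round(weights * scale), q_min, q_max)
    return q / scale                      # de-quantized values used at inference

weights = np.random.default_rng(1).normal(0.0, 0.3, size=(4, 4))
print(quantize_fixed_point(weights))
```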
Journal Article
Towards an Interpretable Autoencoder: A Decision-Tree-Based Autoencoder and its Application in Anomaly Detection
TL;DR: In this paper, the authors propose the first interpretable autoencoder based on decision trees, which is designed to handle categorical data without the need to transform the data representation.
Proceedings Article
Robustness to Noisy Synaptic Weights in Spiking Neural Networks
TL;DR: It is found that SNNs are more robust to Gaussian noise in synaptic weights than artificial neural networks (ANNs) under some conditions, which implies the possibility of using high-performance cutting-edge materials with intrinsic noise as an information storage medium in SNNs.
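The experiment described above can be pictured as injecting Gaussian perturbations into trained synaptic weights and re-evaluating the network. The helper below is a hypothetical sketch of that perturbation step; the noise level, the stand-in weights, and the evaluation loop are placeholders, not the paper's setup.

```python
import numpy as np

def add_weight_noise(weights, sigma=0.05, rng=None):
    """Return a copy of `weights` perturbed by zero-mean Gaussian noise of std `sigma`."""
    rng = rng if rng is not None else np.random.default_rng()
    return weights + rng.normal(0.0, sigma, size=weights.shape)

w = np.ones((3, 3))   # stand-in for trained synaptic weights
print(add_weight_noise(w, sigma=0.1, rng=np.random.default_rng(0)))
# Hypothetical evaluation loop (model and evaluate() are placeholders):
# accuracies = [evaluate(model, add_weight_noise(w)) for _ in range(10)]
```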
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
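The core idea, learning a residual F(x) that is added back to the identity shortcut x, can be sketched as a small block like the one below. This assumes PyTorch and illustrative channel sizes; it is not the authors' released code.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions whose output F(x) is added to the identity shortcut x."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut added back

x = torch.randn(1, 16, 8, 8)
print(ResidualBlock(16)(x).shape)   # torch.Size([1, 16, 8, 8])
```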
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors achieved state-of-the-art performance with a deep convolutional neural network (DCNN) consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
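The layer stack summarized in this TL;DR can be written out as follows. Channel counts and kernel sizes follow the published AlexNet description, but the block is an illustrative reconstruction assuming PyTorch, not the original implementation.

```python
import torch
import torch.nn as nn

# Five convolutional layers (some followed by max-pooling), then three
# fully-connected layers ending in 1000 class logits.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # class logits; the 1000-way softmax is applied in the loss
)
print(alexnet_like(torch.randn(1, 3, 227, 227)).shape)   # torch.Size([1, 1000])
```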
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
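The "constant error carousel" refers to the purely additive update of the cell state, which keeps error flow constant across many time steps. The sketch below implements one step of an original-style LSTM cell (input and output gates only; the forget gate was added in later work); weight shapes and initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of an original-style LSTM cell (input and output gates only)."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, o, g = np.split(z, 3)
    i, o, g = sigmoid(i), sigmoid(o), np.tanh(g)
    c = c_prev + i * g          # purely additive cell update: the error carousel
    h = o * np.tanh(c)          # gated output
    return h, c

n_in, n_hidden = 4, 8
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(3 * n_hidden, n_in + n_hidden))
b = np.zeros(3 * n_hidden)
h = c = np.zeros(n_hidden)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape, c.shape)   # (8,) (8,)
```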
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement over the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
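The design idea, stacking many very small (3x3) convolutions with occasional 2x2 max-pooling, can be sketched as below. The stage depths and channel counts loosely follow the 16-layer configuration, but this assumes PyTorch and is an illustrative reconstruction rather than the authors' configuration.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    """A stage of n_convs 3x3 convolutions followed by 2x2 max-pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

features = nn.Sequential(
    vgg_stage(3, 64, 2),
    vgg_stage(64, 128, 2),
    vgg_stage(128, 256, 3),
    vgg_stage(256, 512, 3),
    vgg_stage(512, 512, 3),
)
print(features(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 512, 7, 7])
```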
Related Papers (5)
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham N. Chinya, Cao Yongqiang, Sri Harsha Choday, Georgios D. Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit-Kwan Lin, Andrew Lines, Ruokun Liu, Deepak A. Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi-Hsin Weng, Andreas Wild, Yoon Seok Yang, Hong Wang
Training Deep Spiking Neural Networks Using Backpropagation
A million spiking-neuron integrated circuit with a scalable communication network and interface
Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, Dharmendra S. Modha