Proceedings ArticleDOI

Training Spiking ConvNets by STDP and Gradient Descent

TLDR
The network is the only high-performance spiking CNN that applies bio-inspired STDP rules in a hierarchy of feature extraction and classification within an entirely spike-based framework.
Abstract
This paper proposes a new method for training multi-layer spiking convolutional neural networks (CNNs). Training a multi-layer spiking network is difficult because the output spikes have no derivatives, so the backpropagation method commonly used for non-spiking networks does not apply directly. Our method uses a novel version of layered spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. It starts with conventional learning methods and converts them to spatio-temporally local rules suited for spiking neural networks (SNNs). The training process uses two components: unsupervised feature extraction and supervised classification. The first component is a new STDP rule for spike-based representation learning that trains convolutional filters. The second is a new STDP-based supervised learning rule for spike pattern classification via an approximation to gradient descent. Stacking these components yields a novel spiking CNN of integrate-and-fire (IF) neurons with performance comparable to state-of-the-art deep SNNs. Experimental results show the success of the proposed model on MNIST handwritten digit classification. Our network architecture is the only high-performance spiking CNN that applies bio-inspired STDP rules in a hierarchy of feature extraction and classification within an entirely spike-based framework.
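The abstract combines two ingredients: integrate-and-fire (IF) neurons and STDP learning. As a rough illustration of how they fit together, the sketch below pairs an IF layer with a simplified pair-based STDP update. All constants, names, and the specific update form are assumptions for illustration, not the paper's actual rule.

```python
import numpy as np

# Minimal sketch (not the paper's exact rule): an integrate-and-fire (IF)
# neuron layer with a simplified pair-based STDP update.
rng = np.random.default_rng(0)

n_in, n_out = 100, 10
w = rng.uniform(0.0, 0.5, size=(n_out, n_in))   # synaptic weights (assumed range)
v = np.zeros(n_out)                              # membrane potentials
v_thresh = 1.0                                   # firing threshold (assumed)
a_plus, a_minus = 0.01, 0.012                    # STDP learning rates (assumed)

last_pre = np.full(n_in, -np.inf)                # last pre-synaptic spike times

for t in range(100):
    pre = (rng.random(n_in) < 0.05).astype(float)   # Poisson-like input spikes
    last_pre[pre > 0] = t
    v += w @ pre                                     # integrate (no leak: IF)
    post = v >= v_thresh                             # fire where threshold crossed
    v[post] = 0.0                                    # reset after a spike
    # STDP: potentiate synapses whose pre spike arrived shortly before the
    # post spike, depress the others (a common simplification).
    recent = (t - last_pre) <= 10
    for j in np.where(post)[0]:
        w[j, recent] += a_plus * (1.0 - w[j, recent])
        w[j, ~recent] -= a_minus * w[j, ~recent]
    np.clip(w, 0.0, 1.0, out=w)
```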


Citations
Journal ArticleDOI

Deep learning in spiking neural networks

TL;DR: The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates for processing spatio-temporal data.
Proceedings ArticleDOI

RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network

TL;DR: It is found that performance degradation in the converted SNN stems from the use of a "hard reset" spiking neuron, which is driven to a fixed reset potential once its membrane potential exceeds the firing threshold, leading to information loss during SNN inference.
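The distinction this TL;DR draws can be made concrete: a hard reset discards whatever membrane potential exceeded the threshold, while a reset-by-subtraction keeps that residual. The sketch below contrasts the two; it is a minimal illustration with assumed constants, not the RMP-SNN model itself.

```python
# Illustrative contrast between a "hard reset" neuron and a soft
# (residual membrane potential) reset; names and constants are assumptions.

def step(v, inp, v_thresh=1.0, hard=True):
    """One IF update; returns (new potential, spike flag)."""
    v = v + inp
    spiked = v >= v_thresh
    if spiked:
        # The hard reset discards the overshoot v - v_thresh; the residual
        # reset keeps it, so super-threshold charge is not lost.
        v = 0.0 if hard else v - v_thresh
    return v, spiked

v_hard = v_soft = 0.0
for inp in [0.7, 0.9, 0.5]:
    v_hard, _ = step(v_hard, inp, hard=True)
    v_soft, _ = step(v_soft, inp, hard=False)
print(v_hard, v_soft)  # the soft-reset neuron retains the residual charge
```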
Book ChapterDOI

Deep Spiking Neural Network: Energy Efficiency Through Time Based Coding

TL;DR: This work proposes an ANN-to-SNN conversion methodology that uses a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model; it surpasses the best inference accuracy of converted rate-encoded SNNs with 7-14.5× lower inference latency.
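The exact TSC scheme is not spelled out in this summary, but the general idea behind time-based coding is that an analog value can be carried by spike timing rather than by spike counts. The sketch below shows generic latency coding (larger values spike earlier) under an assumed linear mapping; it is not the paper's encoder.

```python
import numpy as np

# Generic latency coding sketch: one spike per neuron, with larger
# activations mapped to earlier spike times. t_max and the linear
# mapping are illustrative assumptions.

def encode_latency(x, t_max=100):
    """Map activations in [0, 1] to integer spike times; big values spike early."""
    x = np.clip(x, 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)

def decode_latency(t, t_max=100):
    """Invert the encoding to recover approximate activations."""
    return 1.0 - t / t_max

acts = np.array([0.9, 0.2, 0.55])
times = encode_latency(acts)
print(times, decode_latency(times))
```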
Journal ArticleDOI

Locally connected spiking neural networks for unsupervised feature learning.

TL;DR: A method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. The learned features converge quickly to a representation of the dataset and require fewer learnable parameters than other SNN approaches with unsupervised learning.
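A locally connected layer differs from a convolution in that each output position has its own weights rather than sharing one filter. A minimal 1-D sketch of that forward pass, with assumed shapes and names:

```python
import numpy as np

# Sketch of the "locally connected" idea: each output unit sees only a
# local patch of the input, but unlike a convolution the patches do NOT
# share weights.

def locally_connected_1d(x, w, patch=3):
    """x: input vector; w: one weight vector per output position."""
    n_out = len(x) - patch + 1
    assert w.shape == (n_out, patch)
    return np.array([w[i] @ x[i:i + patch] for i in range(n_out)])

x = np.arange(6, dtype=float)
w = np.ones((4, 3))   # a convolution would reuse a single row everywhere
print(locally_connected_1d(x, w))
```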
Journal ArticleDOI

Fast and energy-efficient neuromorphic deep learning with first-spike times

TL;DR: In this paper, the authors describe a rigorous derivation of a learning rule for first-spike times in networks of leaky integrate-and-fire neurons, relying solely on input and output spike times, and show how this mechanism can implement error backpropagation in hierarchical spiking networks.
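The quantity such a rule operates on is the time of a neuron's first output spike. The sketch below only simulates that time numerically for a leaky integrate-and-fire (LIF) neuron with instantaneous (delta) synapses and assumed constants; the paper's contribution, differentiating this time analytically for backpropagation, is not reproduced here.

```python
import numpy as np

# Numerically find the first spike time of a LIF neuron driven by a few
# weighted input spikes. All constants are illustrative assumptions.

def first_spike_time(w, spike_times, tau=10.0, v_thresh=1.0, t_max=100.0, dt=0.1):
    """Return the first time the LIF potential crosses threshold, or None."""
    v, t = 0.0, 0.0
    while t < t_max:
        v *= np.exp(-dt / tau)                              # exponential leak
        v += w[np.isclose(spike_times, t, atol=dt / 2)].sum()  # input jumps
        if v >= v_thresh:
            return t
        t += dt
    return None

w = np.array([0.6, 0.7])
print(first_spike_time(w, spike_times=np.array([1.0, 2.0])))
```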
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art classification performance on ImageNet.
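For reference, the layer stack this TL;DR lists can be written down directly. The sketch below is an AlexNet-like stack in PyTorch with simplified padding and no normalization or dropout; it follows the description rather than the original implementation.

```python
import torch.nn as nn

# AlexNet-like sketch: five conv layers, interleaved max-pooling, and
# three fully connected layers ending in a 1000-way softmax over logits.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits for the final 1000-way softmax
)
```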
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, graph transformer networks (GTNs) are proposed for handwritten character recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Book

Pattern Recognition and Machine Learning

TL;DR: Probability distributions, linear models for regression, linear models for classification, neural networks, graphical models, mixture models and EM, sampling methods, continuous latent variables, and sequential data are covered.