Supervised Learning With First-to-Spike Decoding in Multilayer Spiking Neural Networks.
Brian Gardner, André Grüning, et al.
TL;DR: In this paper, the authors propose a supervised learning method that trains multilayer spiking neural networks to solve classification problems using a rapid, first-to-spike decoding strategy.
Abstract
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally-based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackling real-world challenges, and in particular to transfer such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, and yet remains stable by relying on first-spike responses generated by a deterministic output layer. In addition, we explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalising from the data, and is successful even when used with constrained network architectures containing few input and hidden-layer neurons. Furthermore, we highlight a novel encoding strategy, termed 'scanline encoding', that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained, but optimised, network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
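The first-to-spike readout described in the abstract can be illustrated with a minimal sketch: the predicted class is simply the index of the output neuron whose membrane potential crosses threshold first. The code below is not the authors' implementation; the leaky integrate-and-fire dynamics, function name, and all parameter values are assumptions chosen for illustration only.

```python
def first_to_spike_predict(input_spikes, weights, threshold=1.0,
                           tau=10.0, dt=1.0, t_max=100.0):
    """Classify by the first output neuron to cross threshold.

    input_spikes: list of spike times (ms), one per input neuron.
    weights[j][i]: synaptic weight from input i to output neuron j.
    Uses a simple leaky integrator readout (illustrative only).
    """
    n_out = len(weights)
    v = [0.0] * n_out  # membrane potentials of the output layer
    t = 0.0
    while t < t_max:
        for j in range(n_out):
            v[j] *= (1.0 - dt / tau)  # exponential leak
            # add weighted input for spikes arriving in this time step
            for i, t_spk in enumerate(input_spikes):
                if t <= t_spk < t + dt:
                    v[j] += weights[j][i]
        # the earliest threshold crossing determines the predicted class;
        # ties within a step are broken by lowest neuron index
        winners = [j for j in range(n_out) if v[j] >= threshold]
        if winners:
            return winners[0]
        t += dt
    return None  # no output spike fired: no decision
```

In the paper the hidden neurons are stochastic while the output layer is deterministic; this sketch covers only the decoding step, not the learning rule itself.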
Citations
Journal Article
Online spike-based recognition of digits with ultrafast microlaser neurons
TL;DR: In this article, numerical simulations of different algorithms that utilise ultrafast photonic spiking neurons as receptive fields, allowing image recognition without an offline computing step, are presented, and the merits of event-based, spike-time and rank-order based algorithms adapted to this system are discussed.
Posted Content
Linear Constraints Learning for Spiking Neurons.
Huy Nguyen, Dominique Chu, et al.
TL;DR: In this paper, a new supervised learning algorithm is proposed to train spiking neural networks for classification; it overcomes a limitation of existing multi-spike learning methods by resolving the interference between interacting output spikes during a learning trial.
References
Journal Article
Supervised learning in multilayer spiking neural networks
Ioana Sporea, André Grüning, et al.
TL;DR: A supervised learning algorithm for multilayer spiking neural networks that can be applied to neurons firing multiple spikes in networks with hidden layers, and that converges faster on similar tasks than existing algorithms such as SpikeProp.
Journal Article
Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons
TL;DR: In simulations, this model can solve a Morris water-maze-like navigation task in a number of trials consistent with reported animal performance, and the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
Journal Article
Learning Precisely Timed Spikes
Raoul-Martin Memmesheimer, Ran Rubin, Bence P. Ölveczky, Haim Sompolinsky, et al.
TL;DR: A theory is developed to characterize the capacity of feedforward networks to generate desired spike sequences, and a biologically plausible learning rule is presented that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences.
Journal Article
Matching Recall and Storage in Sequence Learning with Spiking Neural Networks
TL;DR: A generic learning rule is derived that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution, and that is consistent with spike-timing-dependent plasticity.
Journal Article
BP-STDP: Approximating backpropagation using spike timing dependent plasticity
TL;DR: This paper proposes a novel supervised learning approach based on an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons, which enjoys the benefits of both accurate gradient descent and temporally local, efficient STDP.