Open Access Journal ArticleDOI

A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons

TL;DR
This work proposes a novel, highly effective, and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables the trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than the current state-of-the-art spiking neuron learning methods.
Abstract
Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity. However, existing learning methods used to realize such computation often result in relatively low accuracy and poor robustness to noise. To address these limitations, we propose a novel, highly effective, and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables the trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than the current state-of-the-art spiking neuron learning methods. While traditional spike-driven learning methods use an error function based on the difference between the actual and desired output spike trains, the proposed MemPo-Learn method employs an error function based on the difference between the output neuron's membrane potential and its firing threshold. The efficiency of the proposed learning method is further improved through the introduction of an adaptive strategy, called the skip-scan training strategy, that selectively identifies the time steps at which to apply weight adjustments. The proposed strategy enables the MemPo-Learn method to effectively and efficiently learn the desired output spike train even when much smaller time steps are used. In addition, the learning rule of MemPo-Learn is further improved to help mitigate the impact of input noise on the timing accuracy and reliability of the neuron's firing dynamics. The proposed learning method is thoroughly evaluated on synthetic data and is further demonstrated on real-world classification tasks. Experimental results show that the proposed method can achieve high learning accuracy with a significant improvement in learning time and better robustness to different types of noise.
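To make the core idea of a membrane potential-driven error concrete, the sketch below shows one possible weight update for a leaky integrate-and-fire neuron, where the error at each time step is measured against the firing threshold rather than against output spike times alone. This is a minimal illustration under stated assumptions, not the exact MemPo-Learn rule; the neuron model, the error definition, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

# Minimal, illustrative sketch of a membrane-potential-driven weight update
# for a current-based leaky integrate-and-fire (LIF) neuron. NOT the exact
# MemPo-Learn rule: the neuron model, error definition, and parameters below
# are assumptions made for this example only.

def lif_potential(weights, input_spikes, tau=20.0, dt=1.0):
    """Membrane potential trace of a simple LIF neuron (no reset, for brevity).
    input_spikes has shape (num_synapses, num_time_steps)."""
    T = input_spikes.shape[1]
    v = np.zeros(T)
    for t in range(1, T):
        v[t] = v[t - 1] * np.exp(-dt / tau) + weights @ input_spikes[:, t]
    return v

def mempo_style_update(weights, input_spikes, desired, theta=1.0, lr=0.01):
    """One pass of a membrane-potential-driven update.

    desired[t] = 1 where an output spike is wanted, 0 elsewhere.
    At desired spike times where v < theta, the weights of inputs active at
    that step are potentiated in proportion to (theta - v); at undesired
    times where v >= theta, they are depressed in proportion to (v - theta).
    """
    v = lif_potential(weights, input_spikes)
    for t in range(len(v)):
        err = 0.0
        if desired[t] == 1 and v[t] < theta:
            err = theta - v[t]            # potential too low where a spike is required
        elif desired[t] == 0 and v[t] >= theta:
            err = -(v[t] - theta)         # potential too high where no spike is wanted
        if err != 0.0:
            weights += lr * err * input_spikes[:, t]
    return weights
```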



Citations
Journal ArticleDOI

Heterogeneous Domain Adaptation Through Progressive Alignment

TL;DR: This paper proposes a novel heterogeneous domain adaptation (HDA) method that optimizes both feature discrepancy and distribution divergence in a unified objective function: it first learns a new transferable feature space through dictionary-sharing coding and then aligns the distribution gaps in the new space.
Journal ArticleDOI

Supervised learning in spiking neural networks: A review of algorithms and evaluations

TL;DR: This article presents a comprehensive review of supervised learning algorithms for spiking neural networks, evaluates them qualitatively and quantitatively, provides five qualitative performance evaluation criteria, and presents a new taxonomy of supervised learning algorithms based on these criteria.
Journal ArticleDOI

BP-STDP: Approximating backpropagation using spike timing dependent plasticity

TL;DR: This paper proposes a novel supervised learning approach based on an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons, which enjoys the benefits of both accurate gradient descent and temporally local, efficient STDP.
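As a loose illustration of that idea, a teacher-driven, STDP-like update can potentiate the weights of recently active presynaptic inputs when a target spike is missing and depress them when an unwanted spike occurs. The snippet below is an assumption-laden sketch of this general mechanism, not the BP-STDP rule as published; the trace definition and learning rate are chosen only for the example.

```python
import numpy as np

# Illustrative, teacher-driven STDP-like weight update for one
# integrate-and-fire neuron. A sketch of the general idea only.

def teacher_stdp_update(weights, pre_trace, target_spike, actual_spike, lr=0.005):
    """pre_trace holds a per-synapse, low-pass-filtered record of recent
    presynaptic spikes (an eligibility-like trace). The teacher term is the
    difference between target and actual postsynaptic spikes at this step:
    +1 potentiates when a target spike is missing, -1 depresses when an
    unwanted spike occurs, 0 leaves the weights unchanged."""
    teacher = float(target_spike) - float(actual_spike)
    return weights + lr * teacher * pre_trace

# Example: a target spike was expected but the neuron stayed silent.
w = teacher_stdp_update(np.ones(4), np.array([0.8, 0.1, 0.0, 0.5]), 1, 0)
```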
Posted Content

BP-STDP: Approximating Backpropagation using Spike Timing Dependent Plasticity

TL;DR: In this article, an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons is proposed.
References
Journal ArticleDOI

A quantitative description of membrane current and its application to conduction and excitation in nerve

TL;DR: This article concludes a series of papers concerned with the flow of electric current through the surface membrane of a giant nerve fibre by putting the experimental results into mathematical form and showing that they account for conduction and excitation in quantitative terms.
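For context, the membrane equation at the heart of that quantitative description, in its standard Hodgkin-Huxley form, expresses the applied current as the sum of a capacitive term and the potassium, sodium, and leak currents, each scaled by voltage-dependent gating variables:

```latex
C_m \frac{dV}{dt} = I \;-\; \bar{g}_{K}\, n^{4}\,(V - V_{K}) \;-\; \bar{g}_{Na}\, m^{3} h\,(V - V_{Na}) \;-\; \bar{g}_{L}\,(V - V_{L}),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \quad x \in \{n, m, h\}.
```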
Journal ArticleDOI

Simple model of spiking neurons

TL;DR: A model is presented that reproduces the spiking and bursting behavior of known types of cortical neurons and combines the biological plausibility of Hodgkin-Huxley-type dynamics with the computational efficiency of integrate-and-fire neurons.
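The model referenced above is the two-variable Izhikevich model, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d whenever v reaches 30 mV. The sketch below simulates it with the regular-spiking parameter set from the original paper; the input current and integration step are arbitrary choices made for this example.

```python
# Simple Euler simulation of the Izhikevich spiking neuron model.
# Parameters (a, b, c, d) correspond to the regular-spiking example from
# the original paper; I (input current) and dt are illustrative choices.

def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=1000.0, dt=0.25):
    steps = int(T / dt)
    v, u = c, b * c
    spike_times = []
    for k in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
        if v >= 30.0:                 # spike peak reached
            spike_times.append(k * dt)
            v, u = c, u + d           # reset after the spike
    return spike_times

print(len(izhikevich()), "spikes in 1 s of simulated time")
```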
Book

Spiking Neuron Models: Single Neurons, Populations, Plasticity

TL;DR: This book presents formal models of spiking neurons, from detailed conductance-based descriptions to two-dimensional reductions and integrate-and-fire models, and extends them to the dynamics of neural populations and to models of synaptic plasticity.
Book

Spikes: Exploring the Neural Code

TL;DR: Spikes provides a self-contained review of the concepts from information theory and statistical decision theory relevant to the representation of sensory signals in neural spike trains, and uses this quantitative framework to pose precise questions about the structure of the neural code.