
Hebbian learning and spiking neurons
Richard Kempter*
Physik Department, Technische Universität München, D-85747 Garching bei München, Germany

Wulfram Gerstner
Swiss Federal Institute of Technology, Center of Neuromimetic Systems, EPFL-DI, CH-1015 Lausanne, Switzerland

J. Leo van Hemmen
Physik Department, Technische Universität München, D-85747 Garching bei München, Germany
(Received 6 August 1998; revised manuscript received 23 November 1998)
A correlation-based ("Hebbian") learning rule at a spike level with millisecond resolution is formulated, mathematically analyzed, and compared with learning in a firing-rate description. The relative timing of presynaptic and postsynaptic spikes influences synaptic weights via an asymmetric "learning window." A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and neuronal spike dynamics can be separated. The differential equation is solved for a Poissonian neuron model with stochastic spike arrival. It is shown that correlations between input and output spikes tend to stabilize structure formation. With an appropriate choice of parameters, learning leads to an intrinsic normalization of the average weight and the output firing rate. Noise generates diffusion-like spreading of synaptic weights. [S1063-651X(99)02804-4]

PACS number(s): 87.19.La, 05.65.+b, 87.18.Sn
I. INTRODUCTION
Correlation-based or "Hebbian" learning [1] is thought to be an important mechanism for the tuning of neuronal connections during development and thereafter. It has been shown by various model studies that a learning rule which is driven by the correlations between presynaptic and postsynaptic neurons leads to an evolution of neuronal receptive fields [2-9] and topologically organized maps [10-12].

In all these models, learning is based on the correlation between neuronal firing rates, that is, a continuous variable reflecting the mean activity of a neuron. This is a valid description on a time scale of 100 ms and more. On a time scale of 1 ms, however, neuronal activity consists of a sequence of short electrical pulses, the so-called action potentials or spikes. During recent years experimental and theoretical evidence has accumulated which suggests that temporal coincidences between spikes on a millisecond or even submillisecond scale play an important role in neuronal information processing [13-24]. If so, a rate description may, and often will, neglect important information that is contained in the temporal structure of a neuronal spike train.

Neurophysiological experiments also suggest that the change of a synaptic efficacy depends on the precise timing of postsynaptic action potentials with respect to presynaptic input spikes on a time scale of 10 ms. Specifically, a synaptic weight is found to increase if presynaptic firing precedes a postsynaptic spike, and to decrease otherwise [25,26]; see also [27-33]. Our description of learning at a temporal resolution of spikes takes these effects into account.

In contrast to the standard rate models of Hebbian learning, we introduce and analyze a learning rule where synaptic modifications are driven by the temporal correlations between presynaptic and postsynaptic spikes. First steps towards a detailed modeling of temporal relations have been taken for rate models in [34] and for spike models in [22,35-43].
II. DERIVATION OF THE LEARNING EQUATION
A. Specification of the Hebb rule
We consider a neuron that receives input from $N \gg 1$ synapses with efficacies $J_i$, $1 \le i \le N$; cf. Fig. 1. We assume that changes of $J_i$ are induced by presynaptic and postsynaptic spikes. The learning rule consists of three parts. (i) Let $t_i^f$ be the arrival time of the $f$th input spike at synapse $i$.
*Electronic address: Richard.Kempter@physik.tu-muenchen.de
Electronic address: Wulfram.Gerstner@di.epfl.ch
Electronic address: Leo.van.Hemmen@physik.tu-muenchen.de
FIG. 1. Single neuron. We study the development of synaptic weights $J_i$ (small filled circles, $1 \le i \le N$) of a single neuron (large circle). The neuron receives input spike trains denoted by $S_i^{\rm in}$ and produces output spikes denoted by $S^{\rm out}$.
PHYSICAL REVIEW E, VOLUME 59, NUMBER 4, APRIL 1999
1063-651X/99/59(4)/4498(17)/$15.00, PRE 59, 4498, ©1999 The American Physical Society

The arrival of a spike induces the weight $J_i$ to change by an amount $\eta\, w^{\rm in}$, which can be either positive or negative. The quantity $\eta > 0$ is a "small" parameter. (ii) Let $t^n$ be the $n$th output spike of the neuron under consideration. This event triggers the change of all $N$ efficacies by an amount $\eta\, w^{\rm out}$, which can also be positive or negative. (iii) Finally, time differences between all pairs of input and output spikes influence the change of the efficacies. Given a time difference $s = t_i^f - t^n$ between input and output spikes, $J_i$ is changed by an amount $\eta\, W(s)$, where the learning window $W$ is a real-valued function. It is to be specified shortly; cf. also Fig. 6.
Starting at time $t$ with an efficacy $J_i(t)$, the total change $\Delta J_i(t) = J_i(t+T) - J_i(t)$ in a time interval $T$, which may be interpreted as the length of a learning trial, is calculated by summing the contributions of input and output spikes as well as all pairs of input and output spikes occurring in the time interval $[t, t+T]$. Denoting the input spike train at synapse $i$ by a series of $\delta$ functions, $S_i^{\rm in}(t) = \sum_f \delta(t - t_i^f)$, and, similarly, output spikes by $S^{\rm out}(t) = \sum_n \delta(t - t^n)$, we can formulate the rules (i)-(iii) explicitly by setting
$$\Delta J_i(t) = \eta \int_t^{t+T} dt'\, \big[ w^{\rm in}\, S_i^{\rm in}(t') + w^{\rm out}\, S^{\rm out}(t') \big] + \eta \int_t^{t+T} dt' \int_t^{t+T} dt''\, W(t''-t')\, S_i^{\rm in}(t'')\, S^{\rm out}(t') \tag{1a}$$
$$= \eta \left[ \sum_{t_i^f}{}' \, w^{\rm in} + \sum_{t^n}{}' \, w^{\rm out} + \sum_{t_i^f,\, t^n}{}' \, W(t_i^f - t^n) \right]. \tag{1b}$$
In Eq. (1b) the prime indicates that only firing times $t_i^f$ and $t^n$ in the time interval $[t, t+T]$ are to be taken into account; cf. Fig. 2.
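As a concrete illustration of rules (i)-(iii), the following Python sketch evaluates the sums of Eq. (1b) for one learning trial, given the input spike times at synapse $i$ and the output spike times within $[t, t+T]$. The exponential window shape and all numerical values ($w^{\rm in}$, $w^{\rm out}$, amplitudes, time constants) are placeholder assumptions, not the parameters used in this paper.

    import numpy as np

    def learning_window(s, A_plus=1e-2, A_minus=1e-2, tau_plus=10.0, tau_minus=10.0):
        """Illustrative asymmetric learning window W(s), s = t_i^f - t^n in ms.
        s < 0 (input precedes output) -> potentiation; s > 0 -> depression.
        The paper's actual window is specified later (cf. its Fig. 6)."""
        return np.where(s < 0, A_plus * np.exp(s / tau_plus),
                        -A_minus * np.exp(-s / tau_minus))

    def delta_J_i(t_in, t_out, w_in=1e-2, w_out=-1e-2, eta=1e-3):
        """Total change Delta J_i of Eq. (1b) for one trial.
        t_in:  input spike times t_i^f at synapse i within [t, t+T] (ms)
        t_out: output spike times t^n within [t, t+T] (ms)"""
        t_in, t_out = np.asarray(t_in, float), np.asarray(t_out, float)
        dJ = w_in * t_in.size + w_out * t_out.size      # single-spike contributions
        if t_in.size and t_out.size:                    # all pairs (t_i^f, t^n)
            s = t_in[:, None] - t_out[None, :]
            dJ += learning_window(s).sum()
        return eta * dJ

    # example trial: three input and two output spikes
    print(delta_J_i(t_in=[12.0, 30.0, 31.5], t_out=[15.0, 33.0]))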
Equation (1) represents a Hebb-type learning rule since it correlates presynaptic and postsynaptic behavior. More precisely, our learning scheme depends on the time sequence of input and output spikes. The parameters $w^{\rm in}$, $w^{\rm out}$ as well as the amplitude of the learning window $W$ may, and in general will, depend on the value of the efficacy $J_i$. Such a $J_i$ dependence is useful so as to avoid unbounded growth of synaptic weights. Even though we have not emphasized this in our notation, most of the theory developed below is valid for $J_i$-dependent parameters; cf. Sec. V B.
B. Ensemble average
Given that input spiking is random but partially correlated and that the generation of spikes is in general a complicated dynamic process, an analysis of Eq. (1) is a formidable problem. We therefore simplify it. We have introduced a small parameter $\eta > 0$ into Eq. (1) with the idea in mind that the learning process is performed on a much slower time scale than the neuronal dynamics. Thus we expect that only averaged quantities enter the learning dynamics.

Considering averaged quantities may also be useful in order to disregard the influence of noise. In Eq. (1) spikes are discrete events that trigger a discontinuous change of the synaptic weight; cf. Fig. 2 (bottom). If we assume a stochastic spike arrival or if we assume a stochastic process for generating output spikes, the change $\Delta J_i$ is a random variable, which exhibits fluctuations around some mean drift. Averaging implies that we focus on the drift and calculate the expected rate of change. Fluctuations are treated in Sec. VI.
1. Self-averaging of learning
Effective learning needs repetition over many trials of length $T$, each individual trial being independent of the previous ones. Equation (1) tells us that the results of the individual trials are to be summed. According to the (strong) law of large numbers [44] in conjunction with $\eta$ being "small" [45], we can average the resulting equation, viz., Eq. (1), regardless of the random process. In other words, the learning procedure is self-averaging. Instead of averaging over several trials, we may also consider one single long trial during which input and output characteristics remain constant. Again, if $\eta$ is sufficiently small, time scales are separated and learning is self-averaging.

The corresponding average over the resulting random process is denoted by angular brackets $\langle\,\rangle$ and is called an ensemble average, in agreement with physical usage. It is a probability measure on a probability space, which need not be specified explicitly. We simply refer to the literature [44]. Substituting $s = t'' - t'$ on the right-hand side of Eq. (1a) and dividing both sides by $T$, we obtain
FIG. 2. Hebbian learning and spiking neurons, schematic. In the bottom graph we plot the time course of the synaptic weight $J_i(t)$ evoked through input and output spikes (upper graphs, vertical bars). An output spike, e.g., at time $t^1$, induces the weight $J_i$ to change by an amount $w^{\rm out}$, which is negative here. To consider the effect of correlations between input and output spikes, we plot the learning window $W(s)$ (center graphs) around each output spike, where $s = 0$ matches the output spike times (vertical dashed lines). The three input spikes at times $t_i^1$, $t_i^2$, and $t_i^3$ (vertical dotted lines) increase $J_i$ by an amount $w^{\rm in}$ each. There is no influence of correlations between these input spikes and the output spike at time $t^1$. This becomes visible with the aid of the learning window $W$ centered at $t^1$. The input spikes are too far away in time. The next output spike at $t^2$, however, is close enough to the previous input spike at $t_i^3$. The weight $J_i$ is changed by $w^{\rm out} < 0$ plus the contribution $W(t_i^3 - t^2) > 0$, the sum of which is positive (arrowheads). Similarly, the input spike at time $t_i^4$ leads to a change $w^{\rm in} + W(t_i^4 - t^2) < 0$.

$$\frac{\langle \Delta J_i\rangle(t)}{T} = \frac{\eta}{T}\int_t^{t+T} dt'\,\big[ w^{\rm in}\, \langle S_i^{\rm in}\rangle(t') + w^{\rm out}\, \langle S^{\rm out}\rangle(t') \big] + \frac{\eta}{T}\int_t^{t+T} dt' \int_{t-t'}^{t+T-t'} ds\, W(s)\, \langle S_i^{\rm in}(t'+s)\, S^{\rm out}(t')\rangle . \tag{2}$$
2. Example: Inhomogeneous Poisson process
Averaging the learning equation before proceeding is justified because both input and output processes will be taken to be inhomogeneous Poisson processes, as is assumed throughout Secs. IV-VI. An inhomogeneous Poisson process with time-dependent rate function $\lambda(t) \ge 0$ is characterized by two facts: (i) disjoint intervals are independent, and (ii) the probability of getting a single event at time $t$ in an interval of length $\Delta t$ is $\lambda(t)\,\Delta t$, more events having a probability $o(\Delta t)$; see also [46], Appendix A, for a simple exposition of the underlying mathematics. The integrals in Eq. (1a) or the sums in Eq. (1b) therefore decompose into many independent events and, thus, the strong law of large numbers applies to them. The output is a temporally local process as well, so that the strong law of large numbers also applies to the output spikes at times $t^n$ in Eq. (1).

If we describe input spikes by inhomogeneous Poisson processes with intensity $\lambda_i^{\rm in}(t)$, then we may identify the ensemble average over a spike train with the stochastic intensity, $\langle S_i^{\rm in}\rangle(t) = \lambda_i^{\rm in}(t)$; cf. Fig. 3. The intensity $\lambda_i^{\rm in}(t)$ can be interpreted as the instantaneous rate of spike arrival at synapse $i$. In contrast to temporally averaged mean firing rates, the instantaneous rate may vary on a fast time scale in many biological systems; cf. Sec. III C. The stochastic intensity $\langle S^{\rm out}\rangle(t)$ is the instantaneous rate of observing an output spike, where $\langle\,\rangle$ is an ensemble average over both the input and the output. Finally, the correlation function $\langle S_i^{\rm in}(t'')\, S^{\rm out}(t')\rangle$ is to be interpreted as the joint probability density for observing an input spike at synapse $i$ at the time $t''$ and an output spike at time $t'$.
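Properties (i) and (ii) directly suggest a way of generating such input spike trains numerically, namely by thinning a homogeneous Poisson process. The following Python sketch is a minimal example; the sinusoidally modulated intensity with a 10 Hz mean (as in Fig. 3) is merely an assumed choice for the demonstration.

    import numpy as np

    def inhomogeneous_poisson(rate_fn, rate_max, t_max, seed=0):
        """Spike times on [0, t_max] (s) of an inhomogeneous Poisson process with
        intensity rate_fn(t) <= rate_max, generated by thinning: candidates are
        drawn at the constant rate rate_max and kept with probability rate_fn(t)/rate_max."""
        rng = np.random.default_rng(seed)
        t, spikes = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)          # next candidate event
            if t > t_max:
                break
            if rng.random() < rate_fn(t) / rate_max:      # accept with prob. lambda(t)/rate_max
                spikes.append(t)
        return np.array(spikes)

    # assumed example intensity: 10 Hz mean, modulated at 5 Hz
    lam = lambda t: 10.0 * (1.0 + 0.8 * np.sin(2.0 * np.pi * 5.0 * t))
    spikes = inhomogeneous_poisson(lam, rate_max=18.0, t_max=10.0)
    print(spikes.size, "spikes, empirical mean rate", spikes.size / 10.0, "Hz")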
C. Separation of time scales
We require the length $T$ of a learning trial in Eq. (2) to be much larger than typical interspike intervals. Both many input spikes at any synapse and many output spikes should occur on average in a time interval of length $T$. Then, using the notation $\overline{f}(t) = T^{-1}\int_t^{t+T} dt'\, f(t')$, we may introduce the mean firing rates $\nu_i^{\rm in}(t) = \overline{\langle S_i^{\rm in}\rangle}(t)$ and $\nu^{\rm out}(t) = \overline{\langle S^{\rm out}\rangle}(t)$. We call $\nu_i^{\rm in}$ and $\nu^{\rm out}$ mean firing rates in order to distinguish them from the previously defined instantaneous rates $\langle S_i^{\rm in}\rangle$ and $\langle S^{\rm out}\rangle$, which are the result of an ensemble average only. Because of their definition, mean firing rates $\nu$ always vary slowly as a function of time. That is, they vary on a time scale of the order of $T$. The quantities $\nu_i^{\rm in}$ and $\nu^{\rm out}$ therefore carry hardly any information that may be present in the timing of discrete spikes.
For the sake of further simplification of Eq. (2), we define the width $\mathcal{W}$ of the learning window $W(s)$ and consider the case $T \gg \mathcal{W}$. Most of the "mass" of the learning window should be inside the interval $[-\mathcal{W}, \mathcal{W}]$. Formally we require $\int_{-\mathcal{W}}^{\mathcal{W}} ds\, |W(s)| \gg \int_{-\infty}^{-\mathcal{W}} ds\, |W(s)| + \int_{\mathcal{W}}^{\infty} ds\, |W(s)|$. For $T \gg \mathcal{W}$ the integration over $s$ in Eq. (2) can be extended to run from $-\infty$ to $\infty$. With the definition of a temporally averaged correlation function,
$$C_i(s;t) = \overline{\langle S_i^{\rm in}(t+s)\, S^{\rm out}(t)\rangle}, \tag{3}$$
the last term on the right in Eq. (2) reduces to $\int_{-\infty}^{\infty} ds\, W(s)\, C_i(s;t)$. Correlations between presynaptic and postsynaptic spikes thus enter spike-based Hebbian learning through $C_i$ convolved with the window $W$. We note that the correlation $C_i(s;t)$, though being both an ensemble- and a temporally averaged quantity, may change as a function of $s$ on a much faster time scale than $T$ or the width $\mathcal{W}$ of the learning window. The temporal structure of $C_i$ depends essentially on the neuron (model) under consideration. An example is given in Sec. IV A.
We require learning to be a slow process; cf. Sec. II B 1. More specifically, we require that $J$ values do not change much in the time interval $T$. Thus $T$ separates the time scale $\mathcal{W}$ (width of the learning window $W$) from the time scale of the learning dynamics, which is proportional to $\eta^{-1}$. Under those conditions we are allowed to approximate the left-hand side of Eq. (2) by the rate of change $dJ_i/dt$, whereby we have omitted the angular brackets for brevity. Absorbing $\eta$ into the learning parameters $w^{\rm in}$, $w^{\rm out}$, and $W$, we obtain
$$\frac{d}{dt} J_i(t) = w^{\rm in}\, \nu_i^{\rm in}(t) + w^{\rm out}\, \nu^{\rm out}(t) + \int_{-\infty}^{\infty} ds\, W(s)\, C_i(s;t). \tag{4}$$
The ensemble-averaged learning equation (4), which holds for any neuron model, will be the starting point of the arguments below.
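Equation (4) is straightforward to evaluate numerically once $\nu_i^{\rm in}$, $\nu^{\rm out}$, and the correlation function $C_i(s;t)$ are tabulated on a grid of time lags. The Python sketch below does this by simple quadrature; the window shape and the Gaussian-shaped correlation function are placeholders chosen only to exercise the formula, not quantities taken from the paper.

    import numpy as np

    def dJ_dt(nu_in, nu_out, s_grid, W_vals, C_vals, w_in=1e-5, w_out=-1e-5):
        """Right-hand side of Eq. (4): w_in*nu_in + w_out*nu_out + int ds W(s) C(s;t).
        Rates are per ms, s_grid in ms; W_vals and C_vals are sampled on s_grid."""
        return w_in * nu_in + w_out * nu_out + np.trapz(W_vals * C_vals, s_grid)

    s = np.linspace(-100.0, 100.0, 2001)                               # time lags (ms)
    W = 1e-4 * np.where(s < 0, np.exp(s / 10.0), -np.exp(-s / 10.0))   # assumed window
    C = 1e-4 * (1.0 + 5.0 * np.exp(-(s + 5.0) ** 2 / 200.0))           # assumed C_i(s;t)
    print(dJ_dt(nu_in=0.01, nu_out=0.02, s_grid=s, W_vals=W, C_vals=C))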
III. SPIKE-BASED AND RATE-BASED HEBBIAN LEARNING
In this section we indicate the assumptions that are required to reduce spike-based to rate-based Hebbian learning and outline the limitations of the latter.
A. Rate-based Hebbian learning
In neural network theory, the hypothesis of Hebb [1] is usually formulated as a learning rule where the change of a synaptic efficacy $J_i$ depends on the correlation between the mean firing rate $\nu_i^{\rm in}$ of the $i$th presynaptic neuron and the mean firing rate $\nu^{\rm out}$ of a postsynaptic neuron, viz.,
FIG. 3. Inhomogeneous Poisson process. In the upper graph we have plotted an example of an instantaneous rate $\lambda_i^{\rm in}(t)$ in units of Hz. The average rate is 10 Hz (dashed line). The lower graph shows a spike train $S_i^{\rm in}(t)$ which is a realization of an inhomogeneous Poisson process with rate $\lambda_i^{\rm in}(t)$. The spike times are denoted by vertical bars.

$$\frac{dJ_i}{dt} \equiv \dot J_i = a_0 + a_1\, \nu_i^{\rm in} + a_2\, \nu^{\rm out} + a_3\, \nu_i^{\rm in}\, \nu^{\rm out} + a_4\, (\nu_i^{\rm in})^2 + a_5\, (\nu^{\rm out})^2, \tag{5}$$
where $a_0 < 0$, $a_1$, $a_2$, $a_3$, $a_4$, and $a_5$ are proportionality constants. Apart from the decay term $a_0$ and the "Hebbian" term $\nu_i^{\rm in}\,\nu^{\rm out}$ proportional to the product of input and output rates, there are also synaptic changes which are driven separately by the presynaptic and postsynaptic rates. The parameters $a_0, \dots, a_5$ may depend on $J_i$. Equation (5) is a general formulation up to second order in the rates; see, e.g., [3,47,12].
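As a minimal illustration, a rate-based rule of the form (5) can be integrated with a simple forward-Euler scheme, as in the Python sketch below; all coefficient values are arbitrary placeholders used only to show the structure of the rule.

    import numpy as np

    def rate_hebb_step(J, nu_in, nu_out, a, dt=1.0):
        """One Euler step of Eq. (5) for a vector of synapses.
        a = (a0, a1, a2, a3, a4, a5); nu_in is a vector, nu_out a scalar."""
        a0, a1, a2, a3, a4, a5 = a
        dJ = (a0 + a1 * nu_in + a2 * nu_out + a3 * nu_in * nu_out
              + a4 * nu_in ** 2 + a5 * nu_out ** 2)
        return J + dt * dJ

    J = np.full(4, 0.5)                                # four synapses
    nu_in = np.array([5.0, 10.0, 20.0, 40.0])          # presynaptic rates (Hz)
    print(rate_hebb_step(J, nu_in, nu_out=15.0, a=(-0.01, 0.0, 0.0, 1e-4, 0.0, 0.0)))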
B. Spike-based Hebbian learning
To get Eq. (5) from the spike-based learning rule in Eq. (4), two approximations are required. First, if there are no correlations between input and output spikes apart from the correlations contained in the instantaneous rates, we can write $\langle S_i^{\rm in}(t'+s)\, S^{\rm out}(t')\rangle \approx \langle S_i^{\rm in}\rangle(t'+s)\, \langle S^{\rm out}\rangle(t')$. Second, if these rates change slowly as compared to $T$, then we have $C_i(s;t) \approx \nu_i^{\rm in}(t+s)\, \nu^{\rm out}(t)$. In addition, $\nu = \overline{\langle S\rangle}$ describes the time evolution on a slow time scale; cf. the discussion after Eq. (3). Since we have $T \gg \mathcal{W}$, the rates $\nu_i^{\rm in}$ also change slowly as compared to the width $\mathcal{W}$ of the learning window and, thus, we may replace $\nu_i^{\rm in}(t+s)$ by $\nu_i^{\rm in}(t)$ in the correlation term $\int_{-\infty}^{\infty} ds\, W(s)\, C_i(s;t)$. This yields $\int_{-\infty}^{\infty} ds\, W(s)\, C_i(s;t) \approx \widetilde W(0)\, \nu_i^{\rm in}(t)\, \nu^{\rm out}(t)$, where $\widetilde W(0) := \int_{-\infty}^{\infty} ds\, W(s)$. Under the above assumptions we can identify $\widetilde W(0)$ with $a_3$. By further comparison of Eq. (4) with Eq. (5) we identify $w^{\rm in}$ with $a_1$ and $w^{\rm out}$ with $a_2$, and we are able to reduce Eq. (4) to Eq. (5) by setting $a_0 = a_4 = a_5 = 0$.
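In this reduction the only trace of the learning window that survives is its integral $\widetilde W(0)$. The snippet below computes $\widetilde W(0)$ for an assumed exponential window and reads off the resulting coefficients $a_1 = w^{\rm in}$, $a_2 = w^{\rm out}$, $a_3 = \widetilde W(0)$ of Eq. (5); the window shape and all numbers are illustrative only, not those of the paper.

    import numpy as np

    def W(s, A_plus=1e-4, A_minus=1e-4, tau_plus=10.0, tau_minus=20.0):
        """Assumed asymmetric learning window (s in ms); not the paper's exact window."""
        return np.where(s < 0, A_plus * np.exp(s / tau_plus),
                        -A_minus * np.exp(-s / tau_minus))

    s = np.linspace(-500.0, 500.0, 100001)
    W_tilde_0 = np.trapz(W(s), s)               # W~(0) = integral of W(s) over all s
    w_in, w_out = 1e-5, -1e-5                   # placeholder spike-based parameters
    a1, a2, a3 = w_in, w_out, W_tilde_0         # identification with Eq. (5)
    print("a3 = W~(0) =", W_tilde_0)
    print("analytic value for this window:", 1e-4 * 10.0 - 1e-4 * 20.0)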
C. Limitations of rate-based Hebbian learning
The assumptions necessary to derive Eq. (5) from Eq. (4), however, are not generally valid. According to the results of Markram et al. [25], the width $\mathcal{W}$ of the Hebbian learning window in cortical pyramidal cells is in the range of 100 ms. At retinotectal synapses $\mathcal{W}$ is also in the range of 100 ms [26].

A mean rate formulation thus requires that all changes of the activity are slow on a time scale of 100 ms. This is not necessarily the case. The existence of oscillatory activity in the cortex in the range of 40 Hz (e.g., [14,15,20,48]) implies activity changes every 25 ms. Retinal ganglion cells fire synchronously on a time scale of about 10 ms [49]; cf. also [50]. Much faster activity changes are found in the auditory system. In the auditory pathway of, e.g., the barn owl, spikes can be phase locked to frequencies of up to 8 kHz [51-53]. Furthermore, beyond the correlations between instantaneous rates, additional correlations between spikes may exist.

For all these reasons, the learning rule (5) in the simple rate formulation is insufficient to provide a generally valid description. In Secs. IV and V we will therefore study the full spike-based learning equation (4).
IV. STOCHASTICALLY SPIKING NEURONS
A crucial step in analyzing Eq. (4) is determining the correlations $C_i$ between input spikes at synapse $i$ and output spikes. The correlations, of course, depend strongly on the neuron model under consideration. To highlight the main points of learning, we study a simple toy model. Input spikes are generated by an inhomogeneous Poisson process and fed into a stochastically firing neuron model. For this scenario we are able to derive an analytical expression for the correlations between input and output spikes. The introduction of the model and the derivation of the correlation function are the topic of the first subsection. In the second subsection we use the correlation function in the learning equation (4) and analyze the learning dynamics. In the final two subsections the relation to the work of Linsker [3] (a rate formulation of Hebbian learning) and some extensions based on spike coding are considered.
A. Poisson input and stochastic neuron model
We consider a single neuron which receives input via $N$ synapses, $1 \le i \le N$. The input spike trains arriving at the $N$ synapses are statistically independent and generated by an inhomogeneous Poisson process with time-dependent intensities $\langle S_i^{\rm in}\rangle(t) = \lambda_i^{\rm in}(t)$, $1 \le i \le N$ [46].
In our simple neuron model we assume that output spikes are generated stochastically with a time-dependent rate $\lambda^{\rm out}(t)$ that depends on the timing of input spikes. Each input spike arriving at synapse $i$ at time $t_i^f$ increases (or decreases) the instantaneous firing rate $\lambda^{\rm out}$ by an amount $J_i(t_i^f)\,\epsilon(t - t_i^f)$, where $\epsilon$ is a response kernel. The effect of an incoming spike is thus a change in probability density proportional to $J_i$. Causality is imposed by the requirement $\epsilon(s) = 0$ for $s < 0$. In biological terms, the kernel $\epsilon$ may be identified with an excitatory (or inhibitory) postsynaptic potential. Throughout what follows, we assume excitatory couplings $J_i > 0$ for all $i$ and $\epsilon(s) \ge 0$ for all $s$. In addition, the response kernel $\epsilon(s)$ is normalized to $\int ds\, \epsilon(s) = 1$; cf. Fig. 4.
The contributions from all $N$ synapses as measured at the axon hillock are assumed to add up linearly. The result gives rise to a linear inhomogeneous Poisson model with intensity
$$\lambda^{\rm out}(t) = \nu_0 + \sum_{i=1}^{N} \sum_f J_i(t_i^f)\, \epsilon(t - t_i^f). \tag{6}$$
FIG. 4. The postsynaptic potential $\epsilon$ in units of $[e^{-1}\,\tau_0^{-1}]$ as a function of time $s$ in milliseconds. We have $\epsilon \equiv 0$ for $s < 0$ so that $\epsilon$ is causal. The kernel $\epsilon$ has a single maximum at $s = \tau_0$. For $s \to \infty$ the postsynaptic potential $\epsilon$ decays exponentially with time constant $\tau_0$; cf. also Appendix B 2.

Here, $\nu_0$ is the spontaneous firing rate and the sums run over all spike arrival times at all synapses. By definition, the spike generation process (6) is independent of previous output spikes. In particular, this Poisson model does not include refractoriness.
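The model of Eq. (6) is easy to simulate on a discretized time axis: the output is an inhomogeneous Poisson process whose intensity equals $\nu_0$ plus the weighted sum of postsynaptic potentials evoked by the input spikes. The Python sketch below uses an alpha-shaped kernel $\epsilon$ (as in Fig. 4) and small time bins; all parameter values are placeholders, not the simulation parameters of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    dt, t_max = 0.1, 1000.0                      # time step and duration (ms)
    time = np.arange(0.0, t_max, dt)
    N, nu_in, nu0 = 50, 0.01, 0.005              # synapses; input and spontaneous rates (1/ms)
    tau0 = 5.0                                   # kernel time constant (ms)
    J = np.full(N, 0.02)                         # synaptic weights (held fixed here)

    # alpha-shaped response kernel eps(s) = s/tau0^2 * exp(-s/tau0), integral one
    s = np.arange(0.0, 10.0 * tau0, dt)
    eps = (s / tau0 ** 2) * np.exp(-s / tau0)

    # independent Poisson input spike trains, one row per synapse (0/1 per bin)
    S_in = rng.random((N, time.size)) < nu_in * dt

    # summed postsynaptic drive: sum_i J_i sum_f eps(t - t_i^f)
    drive = np.zeros(time.size)
    for i in range(N):
        drive += J[i] * np.convolve(S_in[i].astype(float), eps, mode="full")[: time.size]

    lam_out = nu0 + drive                        # intensity of Eq. (6), in 1/ms
    S_out = rng.random(time.size) < lam_out * dt # output spikes, bin-wise Bernoulli
    print("output rate approx.", 1000.0 * S_out.sum() / t_max, "Hz")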
In the context of Eq. (4), we are interested in ensemble averages over both the input and the output. Since Eq. (6) is a linear equation, the average can be performed directly and yields
$$\langle S^{\rm out}\rangle(t) = \nu_0 + \sum_{i=1}^{N} J_i(t) \int_0^{\infty} ds\, \epsilon(s)\, \lambda_i^{\rm in}(t-s). \tag{7}$$
In deriving Eq. (7) we have replaced $J_i(t_i^f)$ by $J_i(t)$ because efficacies are assumed to change adiabatically with respect to the width of $\epsilon$. The ensemble-averaged output rate in Eq. (7) depends on the convolution of $\epsilon$ with the input rates. In what follows we denote
$$\Lambda_i^{\rm in}(t) = \int_0^{\infty} ds\, \epsilon(s)\, \lambda_i^{\rm in}(t-s). \tag{8}$$
Equation (7) may suggest that input and output spikes are statistically independent, which is not the case. To show this explicitly, we determine the ensemble-averaged correlation $\langle S_i^{\rm in}(t+s)\, S^{\rm out}(t)\rangle$ in Eq. (3). Since $\langle S_i^{\rm in}(t+s)\, S^{\rm out}(t)\rangle$ corresponds to a joint probability, it equals the probability density $\lambda_i^{\rm in}(t+s)$ for an input spike at synapse $i$ at time $t+s$ times the conditional probability density of observing an output spike at time $t$ given the above input spike at $t+s$,
$$\langle S_i^{\rm in}(t+s)\, S^{\rm out}(t)\rangle = \lambda_i^{\rm in}(t+s) \left[ \nu_0 + J_i(t)\,\epsilon(-s) + \sum_{j=1}^{N} J_j(t)\, \Lambda_j^{\rm in}(t) \right]. \tag{9}$$
The first term inside the square brackets is the spontaneous output rate and the second term is the specific contribution caused by the input spike at time $t+s$, which vanishes for $s > 0$. We are allowed to write $J_i(t)$ instead of the "correct" weight $J_i(t+s)$; cf. the remark after Eq. (7). To understand the meaning of the second term, we recall that an input spike arriving before an output spike (i.e., $s < 0$) raises the output firing rate by an amount proportional to $\epsilon(-s)$; cf. Fig. 5. The sum in Eq. (9) contains the mean contributions of all synapses to an output spike at time $t$. For the proof of Eq. (9), we refer to Appendix A.
Inserting Eq. (9) into Eq. (3) we obtain
$$C_i(s;t) = \sum_{j=1}^{N} J_j(t)\, \overline{\lambda_i^{\rm in}(t+s)\, \Lambda_j^{\rm in}(t)} + \overline{\lambda_i^{\rm in}(t+s)}\, \big[ \nu_0 + J_i(t)\,\epsilon(-s) \big], \tag{10}$$
where we have assumed the weights $J_j$ to be constant in the time interval $[t, t+T]$. Temporal averages are denoted by a bar; cf. Sec. II C. Note that $\overline{\lambda_i^{\rm in}}(t) = \nu_i^{\rm in}(t)$.
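For given input intensities, Eq. (10) can be evaluated directly. The Python sketch below computes $C_i(s;t)$ on a grid of lags for sinusoidally modulated input rates, using an alpha-shaped kernel and a Riemann sum for the temporal average over $[t, t+T]$; the rates, weights, and kernel parameters are assumed values for illustration.

    import numpy as np

    dt, T = 0.5, 1000.0                           # ms; T is the averaging interval
    tp = np.arange(0.0, T, dt)                    # grid of times t' in [t, t+T], with t = 0
    tau0, nu0 = 5.0, 0.005                        # kernel time constant (ms), spontaneous rate (1/ms)
    J = np.array([0.02, 0.03, 0.01])              # assumed weights of N = 3 synapses
    N = J.size

    def lam(i, t):
        """Assumed input intensities (1/ms): 10 Hz mean, synapse-specific phase."""
        return 0.01 * (1.0 + 0.5 * np.cos(2.0 * np.pi * t / 100.0 + i))

    def eps(s):
        """Alpha-shaped response kernel, zero for s < 0, integral one (1/ms)."""
        return np.where(s > 0, s / tau0 ** 2 * np.exp(-np.abs(s) / tau0), 0.0)

    # Lambda_j(t') = int_0^inf ds eps(s) lam_j(t'-s), Eq. (8), evaluated on the grid tp
    s_ker = np.arange(0.0, 10.0 * tau0, dt)
    Lam = np.array([[np.sum(eps(s_ker) * lam(j, t - s_ker)) * dt for t in tp] for j in range(N)])

    def C(i, s):
        """Temporally averaged correlation C_i(s;t) of Eq. (10)."""
        lam_shift = lam(i, tp + s)                                    # lambda_i(t'+s)
        term1 = sum(J[j] * np.mean(lam_shift * Lam[j]) for j in range(N))
        term2 = np.mean(lam_shift) * (nu0 + J[i] * eps(-s))
        return term1 + term2

    print([float(C(0, s)) for s in (-20.0, -5.0, 0.0, 5.0, 20.0)])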
B. Learning equation
Before inserting the correlation function (10) into the learning rule (4) we define the covariance matrix
$$q_{ij}(s;t) := \overline{\big[\lambda_i^{\rm in}(t+s) - \nu_i^{\rm in}(t+s)\big]\, \big[\Lambda_j^{\rm in}(t) - \nu_j^{\rm in}(t)\big]} \tag{11}$$
and its convolution with the learning window $W$,
$$Q_{ij}(t) := \int_{-\infty}^{\infty} ds\, W(s)\, q_{ij}(s;t). \tag{12}$$
Using Eqs. (7), (10), and (12) in Eq. (4), we obtain
$$\dot J_i = w^{\rm in}\, \nu_i^{\rm in} + w^{\rm out} \Big[ \nu_0 + \sum_j J_j\, \nu_j^{\rm in} \Big] + \widetilde W(0)\, \nu_i^{\rm in}\, \nu_0 + \sum_j J_j \Big[ Q_{ij} + \widetilde W(0)\, \nu_i^{\rm in}\, \nu_j^{\rm in} + \delta_{ij}\, \nu_i^{\rm in} \int_{-\infty}^{\infty} ds\, W(s)\, \epsilon(-s) \Big]. \tag{13}$$
For the sake of brevity, we have omitted the dependence upon time.
The assumption of identical and constant mean input rates, $\nu_i^{\rm in}(t) = \nu^{\rm in}$ for all $i$, reduces the number of free parameters in Eq. (13) considerably and eliminates all effects coming from rate coding. We define
$$k_1 = \big[ w^{\rm out} + \widetilde W(0)\, \nu^{\rm in} \big]\, \nu_0 + w^{\rm in}\, \nu^{\rm in}, \qquad k_2 = \big[ w^{\rm out} + \widetilde W(0)\, \nu^{\rm in} \big]\, \nu^{\rm in}, \qquad k_3 = \nu^{\rm in} \int ds\, W(s)\, \epsilon(-s) \tag{14}$$
in Eq. (13) and arrive at
$$\dot J_i = k_1 + \sum_j \big( Q_{ij} + k_2 + k_3\, \delta_{ij} \big)\, J_j. \tag{15}$$
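With identical constant input rates and a given matrix $Q_{ij}$, Eq. (15) is a linear differential equation for the weight vector and can be integrated by a forward-Euler scheme. In the Python sketch below $Q_{ij}$ is simply set to zero (no covariance structure in the input) so that only $k_1$, $k_2$, and $k_3$ drive the dynamics; the window shape, kernel, and all numerical values are assumptions chosen for illustration, not the parameters of the paper.

    import numpy as np

    # assumed spike-based parameters (time in ms, rates in 1/ms)
    w_in, w_out = 1e-5, -1e-5
    nu_in, nu0 = 0.01, 0.005
    tau0, tau_p, tau_m = 5.0, 10.0, 20.0          # kernel and window time constants
    A_p, A_m = 1e-5, 1e-5                         # window amplitudes

    s = np.linspace(-200.0, 200.0, 40001)
    W = np.where(s < 0, A_p * np.exp(s / tau_p), -A_m * np.exp(-s / tau_m))   # assumed W(s)
    eps_neg = np.where(s < 0, (-s) / tau0 ** 2 * np.exp(s / tau0), 0.0)       # eps(-s)

    W0 = np.trapz(W, s)                           # W~(0)
    k1 = (w_out + W0 * nu_in) * nu0 + w_in * nu_in
    k2 = (w_out + W0 * nu_in) * nu_in
    k3 = nu_in * np.trapz(W * eps_neg, s)

    # Euler integration of Eq. (15) with Q_ij = 0
    N, dt_learn, steps = 100, 10.0, 20000
    J = np.full(N, 0.5)
    Q = np.zeros((N, N))
    for _ in range(steps):
        J += dt_learn * (k1 + (Q + k2).dot(J) + k3 * J)
    print("k1, k2, k3 =", k1, k2, k3, "; mean weight:", J.mean())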
FIG. 5. Spike-spike correlations. To understand the meaning of Eq. (9) we have sketched $\langle S_i^{\rm in}(t')\, S^{\rm out}(t)\rangle / \lambda_i^{\rm in}(t')$ as a function of time $t$ (full line). The dot-dashed line at the bottom of the graph is the contribution $J_i(t')\,\epsilon(t - t')$ of an input spike occurring at time $t'$. Adding this contribution to the mean rate contribution, $\nu_0 + \sum_j J_j(t)\, \Lambda_j^{\rm in}(t)$ (dashed line), we obtain the rate inside the square brackets of Eq. (9) (full line). At time $t'' > t'$ the input spike at time $t'$ enhances the output firing rate by an amount $J_i(t')\,\epsilon(t'' - t')$ (arrows). Note that in the main text we have taken $t'' - t' = -s$.
