
A Black-box Model for Neurons
N. Roqueiro
Federal University of Santa Catarina
Florianopolis (Brasil)
nestor.roqueiro@ufsc.br
C. Claumann
Researcher
Florianopolis (Brasil)
carlos.claumann@posgrad.ufsc.br
A. Guillamon, E. Fossas
Universitat Politècnica de Catalunya
Barcelona (Spain)
{antoni.guillamon,enric.fossas}@upc.edu
Abstract—We explore the identification of neuronal voltage
traces by artificial neural networks based on wavelets (Wavenet).
More precisely, we apply a modification in the representation
of dynamical systems by Wavenet which decreases the number
of used functions; this approach combines localized and global
scope functions (unlike Wavenet, which uses localized functions
only). As a proof-of-concept, we focus on the identification of
voltage traces obtained by simulation of a paradigmatic neuron
model, the Morris-Lecar model. We show that, after training
our artificial network with biologically plausible input currents,
the network is able to identify the neuron’s behaviour with
high accuracy, thus obtaining a black box that can then be
used for predictive goals. Interestingly, the interval of input
currents used for training, ranging from stimuli for which the
neuron is quiescent to stimuli that elicit spikes, shows the ability
of our network to identify abrupt changes in the bifurcation
diagram, from almost linear input-output relationships to highly
nonlinear ones. These findings open new avenues to investigate
the identification of other neuron models and to provide heuristic
models for real neurons by stimulating them in closed-loop
experiments, that is, using the dynamic-clamp, a well-known
electrophysiology technique.
I. INTRODUCTION
Neurons are the basic information processing structures in
the brain. Through synapses or direct stimulation, neurons
receive input signals that are shaped within the cell according
to its intrinsic properties: morphology, ionic channels, avail-
ability of neurotransmitters,. . . The main observable of these
transformations are voltage changes in the membrane potential
(i.e., the difference in electrical potential between the interior
and the exterior of a neuron). There is a vast literature on
modeling of such intrinsic features, see for instance [3] for
a thorough treatment. The main formalism was introduced
by Hodgkin and Huxley [2] and consists of modeling the
membrane potential of the neuron with the help of Kirchhoff's
laws and first-order kinetics describing the probability of
specific ionic channels (sensitive to sodium, potassium, calcium
or other chemical elements) to be open/closed. A plethora
of experiments has since been devoted to provide specific
models by identifying and quantifying the ionic channels,
giving rise to very precise biophysical models now available
to the computational neuroscience community. However, these
are costly experiments that cannot be performed for every
single trace obtained in electrophysiology labs, and so there
exists a huge amount of experimental data not associated to a
biophysically-derived mathematical model. Thus, the problem
of identification and cell classification from voltage traces is
fundamental to experimental neuroscience, see [5] where the
problem of detection, time-estimation, and cell classification
is treated in order to sort neural action potentials. Here,
we use a computationally efficient modification of classical
Wavelet-based networks to identify dynamics of cells from
voltage traces. The identification method decreases the number
of used functions in Wavenet by combining localized and
global scope functions instead of only localized functions. In
order to test the goodness of this estimation tool, we have
chosen a benchmark neuron model, the Morris-Lecar model,
a paradigmatic biophysical model able to reproduce the main
states of a neuron, namely quiescent state and regular spiking,
with only constant current stimuli. We train the modified
wavenet using voltage traces obtained after applying a variable
input current stimulus, that sweeps a biologically plausible
interval, to the differential equations that define the Morris-
Lecar model. This procedure allows us to identify the parameters
that best fit the data and, ultimately, it provides a black-
box model of the neuron that can be used as a predictive or
inference tool.
II. NEURON MODEL
As a benchmark neuron model, we consider the Morris-
Lecar model proposed in [1], which has been profusely used in
computational neuroscience as it models fundamental types of
neural dynamics while it is still feasible to make a qualitative
analysis of it, see [4]. The dynamics of the neuron is modeled
by a continuous-time dynamical system composed of the
current-balance equation for the membrane potential, $v = v(t)$,
and the K$^+$ gating variable $w = w(t)$, $0 \le w \le 1$, which
represents the probability of the K$^+$ ionic channel to be active:
$$C_m \frac{dv}{dt} = -g_L\,(v - E_L) - g_K\, w\,(v - E_K) - g_{Ca}\, m_\infty(v)\,(v - E_{Ca}) + I_{app},$$
$$\frac{dw}{dt} = \phi\,\frac{w_\infty(v) - w}{\tau_w(v)}. \qquad (1)$$
The leakage, calcium, and potassium currents are of the form
$I_L = g_L\,(v - E_L)$, $I_{Ca} = g_{Ca}\, m_\infty(v)\,(v - E_{Ca})$, and
$I_K = g_K\, w\,(v - E_K)$, respectively; $g_L$, $g_{Ca}$ and $g_K$ are
the maximal conductances of each current, whereas $E_L$, $E_{Ca}$
and $E_K$ denote the Nernst equilibrium potentials, for which the
corresponding current is zero, also known as reversal potentials.
The constant $C_m = 20\ \mu\mathrm{F/cm}^2$ is the membrane capacitance,
$\phi = 1/15$ is a dimensionless constant, $I_{app}$ represents the
(externally) applied current, which will be variable in our
simulations, and
$$m_\infty(v) = \left(1 + \tanh((v - V_1)/V_2)\right)/2,$$
$$w_\infty(v) = \left(1 + \tanh((v - V_3)/V_4)\right)/2,$$
$$\tau_w(v) = \left(\cosh((v - V_3)/(2 V_4))\right)^{-1}. \qquad (2)$$

Fig. 1. Bifurcation diagram of system (1) in terms of the parameter $I_{app}$: maximal value of the variable $v$ (mV) on the equilibrium points or on the periodic orbits (which, indeed, are limit cycles) versus $I_{app}$ ($\mu$A/cm$^2$). The diagram distinguishes stable and unstable steady states and stable and unstable periodic orbits; the bifurcation occurs at $I_{bif} \approx 39.9632$.
The following set of parameters has been used in our
computations, see [4]:
$$E_L = -60,\quad E_K = -84,\quad E_{Ca} = 120 \ (\mathrm{mV}),$$
$$V_1 = -1.2,\quad V_2 = 18,\quad V_3 = 12,\quad V_4 = 17.4 \ (\mathrm{mV}),$$
$$g_L = 2,\quad g_K = 8.0,\quad g_{Ca} = 4.0 \ (\mathrm{mS/cm}^2). \qquad (3)$$
Figure 1 shows the bifurcation diagram of system (1) in terms
of the parameter $I_{app}$. In the experiments, a prescribed
$I_{app}(t)$ that spans from 20 to 80 $\mu$A/cm$^2$ was used. Note that
for $I_{app}$ below $I_{bif} \approx 39.9632$, there is one attractor, which is
an equilibrium point of the system, while for $I_{app} \in (I_{bif}, 80)$
there is also a unique attractor, which is a limit cycle (only the
maximal value of the variable $v$ on the limit cycle is shown).
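The model and parameter set above can be sketched in code. The following Python snippet is an illustration, not the authors' implementation; it encodes the right-hand side of system (1)–(2) with the standard Morris-Lecar values of (3) (minus signs on $E_L$, $E_K$ and $V_1$ restored, since PDF extraction tends to drop them).

```python
# Right-hand side of the Morris-Lecar system (1)-(2) with the
# parameter set (3); Cm = 20 uF/cm^2 and phi = 1/15 as in the text.
import numpy as np

C_m, phi = 20.0, 1.0 / 15.0
E_L, E_K, E_Ca = -60.0, -84.0, 120.0     # reversal potentials (mV)
V1, V2, V3, V4 = -1.2, 18.0, 12.0, 17.4  # gating parameters (mV)
g_L, g_K, g_Ca = 2.0, 8.0, 4.0           # maximal conductances (mS/cm^2)

def m_inf(v):
    return 0.5 * (1.0 + np.tanh((v - V1) / V2))

def w_inf(v):
    return 0.5 * (1.0 + np.tanh((v - V3) / V4))

def tau_w(v):
    return 1.0 / np.cosh((v - V3) / (2.0 * V4))

def morris_lecar(v, w, I_app):
    """Time derivatives (dv/dt, dw/dt) of system (1)."""
    dv = (-g_L * (v - E_L) - g_K * w * (v - E_K)
          - g_Ca * m_inf(v) * (v - E_Ca) + I_app) / C_m
    dw = phi * (w_inf(v) - w) / tau_w(v)
    return dv, dw
```

The function names `m_inf`, `w_inf`, `tau_w` and `morris_lecar` are our own labels for the quantities $m_\infty$, $w_\infty$, $\tau_w$ and the vector field of (1).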
III. THE NETWORK CHARACTERISTICS
Great advances have been made in recent years in the analysis
and identification of dynamical systems using non-linear models
originated from artificial intelligence. In this area, models
obtained from syntax rules (fuzzy logic) and, mainly, those
that use activation functions (neural networks) stand out. In the
artificial-intelligence approach, a neural network comprises
layers of neurons interconnected through weights. Mathematically,
they are complex models whose structure is determined empirically.
The neural network most used for control and non-linear
system identification is the feedforward network; nearly 90% of
the works found in the literature use this kind of network. A great
part of this success can be attributed to the iterative algorithm
used in supervised training, known as backpropagation [6].
Nevertheless, system identification can be very laborious, due to
the great number of network-structure parameters (number of hidden
layers, number of neurons per layer) and training-method choices
(initial selection of weights, learning-rate determination,
momentum rate and stopping criteria) [7]. Network structure and
training method are determined by trial and error or heuristically.
Due to the great number of parameters and the lack of a
mathematical basis, feedforward networks have been replaced by
models that are non-linear but linear in their parameters. This
structure is very attractive because training can be formalized as
a linear regression problem and, hence, solved by least squares.
Two kinds of such networks have been used: radial basis function
networks (RBFN) and, more recently, wavelet networks. RBFN have
only one hidden layer, whose neurons use activation functions,
generally with compact support and defined around centers [7].
The network structure is defined by the number and location of the
centers. Compared to feedforward networks, RBFN need a smaller
number of parameters. Wavelet networks are made of localized
functions, like RBFN, but they have a better mathematical
foundation.
Wavelet networks use the multiresolution concept [8].
Multiresolution analysis is a framework for representing a signal
at different scales or resolutions: a signal is represented as the
sum of successive approximations, obtained from projections of the
signal onto spaces defined in wavelet theory [9], [10].
The use of wavelets in approximations of functions and neu-
ral network construction came out with Bakshi [11], through
the wavenets and Zhang [12] with frame networks. In the
multiresolution frame, the approximation of a function f(x)
is made through its projections to shifted and compressed
versions of a basic function, known as “wavelet mother”.
Translations and compressions and, thus, location and size are
defined by wavelet theory. In this case, network training reduces
to determining the coefficients (weights) of the projections. The
issue is that the number of activation functions of a wavenet
grows exponentially as the number of inputs becomes larger. In
addition, the support of the activation functions becomes too
small relative to the problem domain, since in a wavenet the
support of each multidimensional activation function is obtained
as the intersection of the supports of the unidimensional
localized functions. Hence, there may be functions with very few
points in their support, causing numerical problems during
training, mainly when the data sampling is deficient.
In [17], an approach to reduce the number of activation
functions in a wavenet is proposed. Training data are initially
approximated with activation functions (scale functions) whose
support equals the problem domain (global scope functions), unlike
the originally proposed wavenet, which uses localized functions
only. If the approximation is not adequate, wavelets with an
increasing level of localization can then be added, according to
multiresolution.
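The idea can be sketched as follows. This is a minimal illustration of a network that is linear in its parameters, not the authors' code: the global-scope functions are taken to be low-order polynomials and the localized functions Mexican-hat wavelets, both illustrative choices, with the weights obtained by ordinary least squares.

```python
# Sketch of a linear-in-parameters network: a few global-scope
# columns spanning the whole domain [0, 1], plus localized wavelets
# added level by level; weights are fitted by least squares.
import numpy as np

def mexican_hat(x):
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def design_matrix(x, levels):
    # Global-scope columns: support equals the whole problem domain.
    cols = [np.ones_like(x), x, x**2]
    # Localized columns: translated/compressed wavelets per level.
    for j in range(levels):
        scale = 2**j
        for k in range(scale):
            cols.append(mexican_hat(4.0 * (scale * x - k - 0.5)))
    return np.column_stack(cols)

def fit(x, y, levels):
    phi = design_matrix(x, levels)
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return weights

x = np.linspace(0.0, 1.0, 200)
y = np.tanh(6.0 * (x - 0.4))           # toy target with a sharp rise
err0 = np.mean((design_matrix(x, 0) @ fit(x, y, 0) - y) ** 2)
err3 = np.mean((design_matrix(x, 3) @ fit(x, y, 3) - y) ** 2)
```

Since the level-0 columns are a subset of the level-3 columns, adding wavelets can only reduce the least-squares residual, which mirrors the refinement step described above.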

Fig. 2. Externally applied current $I_{app}$ as a function of the number of integration steps.
IV. THE WAVENET MODEL
A. Dynamical system identification
In this work, we deal with the identification of the neuronal
voltage traces of the Morris-Lecar model proposed in [1]. The
steps followed in the identification process were:
1) Acquisition of the data set for fitting (training patterns):
data were obtained by solving system (1)–(2).
2) Determination of the best network structure: the set of
input variables that best identifies the process was selected
in this step. As selection criterion, the smallest quadratic
error with the smallest number of variables was considered.
3) Validation through dynamic prediction, which corresponds
to the prediction of an arbitrary number of steps forward.
In this case, the first point of the validation data set
(initial condition) is used as an input to the network. For
the remaining points, only the perturbation variable is used
as external information, and the output variables are fed
back.
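The dynamic-prediction scheme of step 3 can be sketched generically: only the initial condition and the input (perturbation) sequence are external, and the model's own outputs are fed back. Here `predict_step` is a hypothetical placeholder standing for any trained one-step predictor.

```python
# Recursive (dynamic) prediction: run arbitrarily many steps forward
# from x0, feeding the model's own output back as the next state.
def dynamic_prediction(predict_step, x0, inputs):
    """Return the predicted trajectory [x0, x1, ..., xN]."""
    x, trajectory = x0, [x0]
    for u in inputs:
        x = predict_step(x, u)   # output feedback, no measured states
        trajectory.append(x)
    return trajectory
```

With a toy linear predictor `lambda x, u: 0.5 * x + u`, three steps from `x0 = 0.0` with unit inputs give `[0.0, 1.0, 1.5, 1.75]`.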
B. Simulation results
The neural network was trained by defining the $I_{app}$ current
as an independent variable. $I_{app}$ is defined as a piecewise-constant
signal with 50 levels randomly drawn from a uniform
distribution. The value of the constant changes every 2000
integration steps (Figure 2).
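Such a training input can be generated in a few lines. This is a sketch under stated assumptions: the 20–80 $\mu$A/cm$^2$ range is taken from Section II, and the random seed is an arbitrary choice for reproducibility.

```python
# Piecewise-constant training input: 50 uniformly random levels,
# each held for 2000 integration steps (100 000 samples in total).
import numpy as np

rng = np.random.default_rng(0)               # arbitrary seed
levels = rng.uniform(20.0, 80.0, size=50)    # assumed range, Section II
I_app = np.repeat(levels, 2000)              # one value per step
```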
To verify the robustness of the identification, parameters $g_L$
and $g_K$ were modified with a uniformly distributed noise $\eta(t)$
as follows:
$$g_L = g_L\,(1 + \eta(t)),$$
$$g_K = g_K\,(1 + \eta(t)). \qquad (4)$$
The noise amplitude value considered in this experiment is
8%.
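A sketch of this perturbation follows. Equation (4) writes the same symbol $\eta(t)$ for both conductances, so a single draw is shared here; independent draws per conductance would be an equally plausible reading of the text.

```python
# Multiplicative uniform noise of amplitude 8% on g_L and g_K,
# as in (4); nominal values taken from parameter set (3).
import numpy as np

rng = np.random.default_rng(1)
G_L, G_K, AMP = 2.0, 8.0, 0.08

def noisy_conductances():
    eta = rng.uniform(-AMP, AMP)          # one shared draw (assumption)
    return G_L * (1.0 + eta), G_K * (1.0 + eta)
```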
Fig. 3. Membrane voltage $v$ (mV) as a function of the number of steps.
Fig. 4. Probability $w$ of the K$^+$ ionic channel to be active, as a function of the number of steps.
To solve the differential equations of the Morris-Lecar model,
the Euler method was used with step $T_s = 0.05$ ms, compatible
with the sampling period of experimental data-acquisition systems.
The solutions of the differential equations of the model and the
neural network prediction are depicted (actually overlapped) in
the next figures. Figure 3 shows the results corresponding to
variable $v$ and Figure 4 those corresponding to variable $w$. As
can be verified from the results presented, the prediction in both
subthreshold and spiking (trigger) conditions is satisfactory. In
Figure 5, the simulated and predicted variables are shown
simultaneously in a phase-plane representation. As can be
appreciated, the network is able to track, with high accuracy, the
oscillations elicited when $I_{app}$ visits the parameter region
with stable limit cycles, that is, when $I_{app} > I_{bif}$.

Fig. 5. $(v, w)$ phase portrait.
V. CONCLUSIONS AND FUTURE WORK
We have tested the performance of a modified wavenet
to identify parameters of a neuron model from its voltage
traces. We show that, after training our artificial network
with biologically plausible input currents, the network is able
to identify the neuron’s behaviour with high accuracy, thus
obtaining a black box that can be further used for predictive
goals. Interestingly, the interval of input currents used to train
the network includes both current levels for which the neuron
is quiescent and current levels that elicit spikes, which shows
the ability of our network to identify abrupt changes in the
bifurcation diagram, from almost linear input-output relation-
ships to highly nonlinear ones. In this work, the simulations
have been performed on a benchmark representative model,
with the aim of providing a proof of concept, but our procedure
can be easily extended to investigate the identification of other
neuron models encoding more sophisticated types of dynamics
(bursting, adaptation, mixed-mode oscillations, etc).
Even more importantly, our approach opens promising avenues
when applied to real neurons, since it naturally leads to a
heuristic model of a neuron just by stimulating it in closed-loop
experiments (that is, using the dynamic clamp, a well-known
electrophysiology technique) and using the output data to train
the wavenet. In addition, this procedure would make it possible
to associate a heuristic model to available data in neuroscience
that has not gone through a careful channel dissection to
biophysically characterize the corresponding neuron, even if the
data does not come from electrophysiological recordings of a
single cell; for instance, it could also provide heuristic models
for populations of neurons. Finally, another application of our
approach is to infer, via inverse control methods, the time course
of the input current received by a neuron, which essentially
corresponds to the synaptic input and thus provides valuable
information about the neuron's connectivity, a paramount problem
in neuroscience.
ACKNOWLEDGMENT
The authors would like to thank the Spanish grant
MINECO-FEDER-UE MTM-2015-71509-C2-2-R (AG) and the Catalan grants
2017SGR1049 (AG) and 2017SGR0872 (EF). This work was done during a
long-term stay of Prof. Roqueiro at the UPC.
REFERENCES
[1] C. Morris and H. Lecar, Voltage Oscillations in the barnacle giant
muscle fiber, Biophys J. 35, 193–213, 1981.
[2] A. L. Hodgkin and A. F. Huxley, The components of membrane
conductance in the giant axon of Loligo, J Physiol. 116, 473–496, 1952.
[3] E. Izhikevich, Dynamical systems in neuroscience: the geometry of
excitability and bursting, MIT Press, 2006.
[4] J. R. Rinzel and G. B. Ermentrout, Analysis of neural excitability and
oscillations, in Methods in Neural Modeling, C. Koch and I. Segev Eds.,
135–169, MIT Press, 1998.
[5] C. Ekanadham, D. Tranchina, and E.P. Simoncelli. A unified framework
and method for automatic neural spike identification, Journal of Neuro-
science Methods, 222, 47—55, 2014.
[6] D.E. Rumelhart, J.L. McClelland. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, 1987.
[7] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, 1999.
[8] S.A. Mallat. A Theory for Multiresolution Signal Decomposition: The
Wavelet Representation, IEEE Trans. Pat. Anal Mach. Intel., 11(7), 674–
693, 1989.
[9] I. Daubechies. Ten Lectures on Wavelets, SIAM, 1992.
[10] G. Strang, T. Nguyen. Wavelets and Filter Banks Wellesley-Cambridge
Press, 1996
[11] B.R. Bakshi, G. Stephanopoulos. Wave-Net: a Multiresolution, Hierarchical Neural Network with Localized Learning, AIChE J., 39(1), 57–81.
[12] Q. Zhang, A. Benveniste. Wavelet Networks, IEEE Trans. Neural Networks, 3(6), 889–898.
[13] C.A. Claumann. Modelagem e controle de processos não lineares: Uma Aplicação de Algoritmos Genéticos no Treinamento de Redes Neurais Recorrentes. Dissertação de Mestrado, UFSC, Brasil.
[14] M. Pottmann, D.E. Seborg. Identification of Nonlinear Process Using Reciprocal Multiquadratic Functions. J. Process Control, 21, 956–980.
[15] M.A. Henson, D.E. Seborg. An Internal Model Control Strategy for Nonlinear Systems, AIChE J., 37(7), 1991.
[16] N. Roqueiro. Redes de Wavelets na Modelagem de Processos não Lineares. Tese de Doutorado, COPPE/UFRJ, 1995.
[17] C.A. Claumann. Desenvolvimento e aplicações de redes neurais wavelets e da teoria de regularização na modelagem de processos. PhD Thesis, UFSC, Florianópolis, Brasil, 2003.