Journal ArticleDOI

Spontaneous activity emerging from an inferred network model captures complex spatio-temporal dynamics of spike data

19 Nov 2018-Scientific Reports (Nature Publishing Group)-Vol. 8, Iss: 1, pp 17056-17056
TL;DR: A principled modification of a widely used generalized linear model is introduced, its structural and dynamic parameters are learned from in-vitro spike data, and the spontaneous activity of the new model captures prominent features of the non-stationary and non-linear dynamics displayed by the biological network.
Abstract: Inference methods are widely used to recover effective models from observed data. However, few studies attempted to investigate the dynamics of inferred models in neuroscience, and none, to our knowledge, at the network level. We introduce a principled modification of a widely used generalized linear model (GLM), and learn its structural and dynamic parameters from in-vitro spike data. The spontaneous activity of the new model captures prominent features of the non-stationary and non-linear dynamics displayed by the biological network, where the reference GLM largely fails, and also reflects fine-grained spatio-temporal dynamical features. Two ingredients were key for success. The first is a saturating transfer function: beyond its biological plausibility, it limits the neuron’s information transfer, improving robustness against endogenous and external noise. The second is a super-Poisson spikes generative mechanism; it accounts for the undersampling of the network, and allows the model neuron to flexibly incorporate the observed activity fluctuations.
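The two key ingredients named above can be illustrated with a minimal, self-contained sketch. This is not the paper's exact parameterization: a logistic curve stands in for the saturating transfer function, a gamma-mixed Poisson (negative binomial) count stands in for the super-Poisson generative mechanism, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def saturating_rate(u, r_max=50.0, beta=1.0):
    # Saturating (logistic) transfer function: bounds the firing rate at
    # r_max, unlike the unbounded exponential of the reference GLM.
    return r_max / (1.0 + np.exp(-beta * u))

def super_poisson_counts(rate, dt=0.001, fano=2.0):
    # Gamma-mixed Poisson (negative binomial) spike counts: the count
    # variance is fano * mean, exceeding the Poisson variance (= mean).
    lam = rate * dt
    mixed = rng.gamma(lam / (fano - 1.0), fano - 1.0)
    return rng.poisson(mixed)

# Drive one model neuron with noisy input and measure the count Fano factor.
u = rng.normal(0.0, 2.0, size=200_000)
counts = super_poisson_counts(saturating_rate(u))
fano_hat = counts.var() / counts.mean()
```

With these settings the empirical Fano factor comes out near 2, i.e. clearly super-Poisson, while the saturating nonlinearity keeps the rate bounded however large the input fluctuations are.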


Citations
Journal ArticleDOI
16 Feb 2021-PLOS ONE
TL;DR: In this paper, a learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks, which facilitates the learning procedure since the network is trained from the beginning to reproduce the desired internal sequence.
Abstract: Recurrent spiking neural networks (RSNN) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, aiming to improve our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle, the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This method makes the learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial in order to progressively minimize the MSE, we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, like learning multidimensional trajectories and solving the classical temporal XOR benchmark.
Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
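A likelihood-maximization rule of the kind described can be sketched for a single probabilistic spiking neuron: the gradient of the log-likelihood of a target spike train is local, of the form (target spike − predicted spike probability) × presynaptic activity. The inputs, target pattern, and parameter values below are illustrative assumptions, not the paper's network model.

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 400, 30
inputs = rng.binomial(1, 0.2, size=(T, N)).astype(float)  # presynaptic spikes
target = rng.binomial(1, 0.1, size=T).astype(float)       # target spike train

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def log_likelihood(w):
    # Bernoulli log-likelihood of the target spike train under the model.
    p = sigmoid(inputs @ w)
    return np.sum(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

w = np.zeros(N)
ll_before = log_likelihood(w)
for _ in range(200):
    p = sigmoid(inputs @ w)
    # Local likelihood gradient: (target spike - predicted probability)
    # times presynaptic activity, averaged over time.
    w += 0.1 * inputs.T @ (target - p) / T
ll_after = log_likelihood(w)
```

Because the likelihood is concave in the weights, this gradient ascent monotonically improves the fit to the target sequence, which is what makes target-based training fast.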

8 citations

Posted ContentDOI
01 Mar 2019-bioRxiv
TL;DR: Results show that a state-dependent approximation can be successfully introduced to take into account the subtle effects of COBA integration and to obtain a theory capable of correctly predicting the activity in regimes of alternating states like slow oscillations.
Abstract: Increasing interest has been shown in recent years in large-scale spiking simulations of cerebral neuronal networks, driven both by the availability of high-performance computers and by increasing detail in experimental observations. In this context it is important to understand how population dynamics are generated by the designed parameters of the networks, which is the question addressed by mean-field theories. Although analytic solutions for the mean-field dynamics have already been proposed for current-based neurons (CUBA), a complete analytic model has not yet been achieved for more realistic neural properties, such as conductance-based (COBA) networks of adaptive exponential neurons (AdEx). Here, we propose a novel principled approach to map a COBA network onto a CUBA one. Such an approach provides a state-dependent approximation capable of reliably predicting the firing-rate properties of an AdEx neuron with non-instantaneous COBA integration. We also applied our theory to population dynamics, predicting the dynamical properties of the network in very different regimes, such as asynchronous irregular (AI) and synchronous irregular (SI) (slow oscillations, SO). These results show that a state-dependent approximation can be successfully introduced to take into account the subtle effects of COBA integration and to obtain a theory capable of correctly predicting the activity in regimes of alternating states like slow oscillations.

8 citations



Journal ArticleDOI
TL;DR: The mechanistic and biological coverage of existing NAMs for DART was assessed and gaps to be addressed were identified, allowing the development of an approach that relies on generating data relevant to the overall mechanisms involved in human reproduction and embryo-foetal development.
Abstract: New Approach Methodologies (NAMs) promise to offer a unique opportunity to enable human-relevant safety decisions to be made without the need for animal testing in the context of exposure-driven Next Generation Risk Assessment (NGRA). Protecting human health against the potential effects a chemical may have on embryo-foetal development and/or aspects of reproductive biology using NGRA is particularly challenging. These are not single endpoint or health effects and risk assessments have traditionally relied on data from Developmental and Reproductive Toxicity (DART) tests in animals. There are numerous Adverse Outcome Pathways (AOPs) that can lead to DART, which means defining and developing strict testing strategies for every AOP, to predict apical outcomes, is neither a tenable goal nor a necessity to ensure NAM-based safety assessments are fit-for-purpose. Instead, a pragmatic approach is needed that uses the available knowledge and data to ensure NAM-based exposure-led safety assessments are sufficiently protective. To this end, the mechanistic and biological coverage of existing NAMs for DART were assessed and gaps to be addressed were identified, allowing the development of an approach that relies on generating data relevant to the overall mechanisms involved in human reproduction and embryo-foetal development. Using the knowledge of cellular processes and signalling pathways underlying the key stages in reproduction and development, we have developed a broad outline of endpoints informative of DART. When the existing NAMs were compared against this outline to determine whether they provide comprehensive coverage when integrated in a framework, we found them to generally cover the reproductive and developmental processes underlying the traditionally evaluated apical endpoint studies. The application of this safety assessment framework is illustrated using an exposure-led case study.

8 citations

Journal ArticleDOI
TL;DR: This result shows that a state-dependent approximation can be successfully introduced to take into account the subtle effects of COBA integration and to deal with a theory capable of correctly predicting the activity in regimes of alternating states like slow oscillations.
Abstract: More interest has been shown in recent years to large-scale spiking simulations of cerebral neuronal networks, coming both from the presence of high-performance computers and increasing details in experimental observations. In this context it is important to understand how population dynamics are generated by the designed parameters of the networks, which is the question addressed by mean-field theories. Despite analytic solutions for the mean-field dynamics already being proposed for current-based neurons (CUBA), a complete analytic description has not been achieved yet for more realistic neural properties, such as conductance-based (COBA) network of adaptive exponential neurons (AdEx). Here, we propose a principled approach to map a COBA on a CUBA. Such an approach provides a state-dependent approximation capable of reliably predicting the firing-rate properties of an AdEx neuron with noninstantaneous COBA integration. We also applied our theory to population dynamics, predicting the dynamical properties of the network in very different regimes, such as asynchronous irregular and synchronous irregular (slow oscillations). This result shows that a state-dependent approximation can be successfully introduced to take into account the subtle effects of COBA integration and to deal with a theory capable of correctly predicting the activity in regimes of alternating states like slow oscillations.
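The COBA-to-CUBA mapping can be sketched numerically: freezing the synaptic driving force at a state-dependent mean voltage turns a conductance input into an equivalent current, while the total conductance sets an effective (shortened) membrane time constant. The parameter values below are illustrative, not taken from the paper.

```python
# Illustrative passive-membrane and synaptic parameters.
g_L, C = 10.0, 200.0          # leak conductance (nS), capacitance (pF)
E_e, E_i = 0.0, -80.0         # excitatory / inhibitory reversal potentials (mV)

def coba_to_cuba(g_e, g_i, v_state):
    # State-dependent map: freeze the driving force at the current mean
    # voltage v_state, yielding an equivalent current (pA) and an effective
    # membrane time constant (ms) shortened by the synaptic conductances.
    i_eff = g_e * (E_e - v_state) + g_i * (E_i - v_state)
    tau_eff = C / (g_L + g_e + g_i)
    return i_eff, tau_eff

i_eff, tau_eff = coba_to_cuba(g_e=5.0, g_i=20.0, v_state=-60.0)
tau_rest = C / g_L  # membrane time constant without synaptic input
```

The map makes the "state-dependent" character explicit: the same conductances produce a different equivalent current at different mean voltages, and strong synaptic bombardment visibly shortens the effective time constant.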

6 citations

Journal ArticleDOI
TL;DR: In this paper, the role of NSCs in the neuronal activity of a pre-existing hippocampal in vitro network grown on microelectrode arrays was investigated, and it was shown that additional stem cells prolonged network bursts with longer intervals, generated a lower number of initiating patterns, and decreased synchronization among neurons.
Abstract: Objective Neural stem cells (NSCs) are continuously produced throughout life in the hippocampus, which is a vital structure for learning and memory. NSCs in the brain incorporate into the functional hippocampal circuits and contribute to processing information. However, little is known about the mechanisms of NSCs' activity in a pre-existing neuronal network. Here, we investigate the role of NSCs in the neuronal activity of a pre-existing hippocampal in vitro network grown on microelectrode arrays. Approach We assessed the change in internal dynamics of the network by additional NSCs based on spontaneous activity. We also evaluated the networks' ability to discriminate between different input patterns by measuring evoked activity in response to external inputs. Main results Analysis of spontaneous activity revealed that additional NSCs prolonged network bursts with longer intervals, generated a lower number of initiating patterns, and decreased synchronization among neurons. Moreover, the network with NSCs showed higher synchronicity in close connections among neurons responding to external inputs and a larger difference in spike counts and cross-correlations during evoked response between two different inputs. Taken together, our results suggested that NSCs alter the internal dynamics of the pre-existing hippocampal network and produce more specific responses to external inputs, thus enhancing the ability of the network to differentiate two different inputs. Significance We demonstrated that NSCs improve the ability to distinguish external inputs by modulating the internal dynamics of a pre-existing network in a hippocampal culture. Our results provide novel insights into the relationship between NSCs and learning and memory.

4 citations

References
Proceedings Article
21 Jun 2010
TL;DR: Binary stochastic hidden units in restricted Boltzmann machines can be approximated by noisy rectified linear units, which learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
Abstract: Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these "Stepped Sigmoid Units" are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors.
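The construction described, a stack of binary copies with progressively more negative biases whose expected total activity is a sum of shifted sigmoids, can be checked numerically: the sum closely tracks softplus, log(1 + e^x), which a rectified linear unit then approximates. A brief sketch:

```python
import numpy as np

def stepped_sigmoid_mean(x, n_copies=100):
    # Expected total activity of n binary copies sharing weights but with
    # biases offset by -0.5, -1.5, -2.5, ...: a sum of shifted sigmoids.
    offsets = np.arange(n_copies) + 0.5
    return sum(1.0 / (1.0 + np.exp(-(x - o))) for o in offsets)

def softplus(x):
    return np.log1p(np.exp(x))

x = np.linspace(-5.0, 10.0, 200)
approx_err = np.max(np.abs(stepped_sigmoid_mean(x) - softplus(x)))
relu = np.maximum(0.0, x)  # the cheap rectified-linear surrogate (noise omitted)
relu_err = np.max(np.abs(relu - softplus(x)))
```

The sum of shifted sigmoids matches softplus to within about 0.01 over this range, while the deterministic ReLU deviates from softplus by at most log 2 (near x = 0) yet preserves relative intensities for large inputs.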

14,799 citations

Journal ArticleDOI
TL;DR: It is suggested that quantities are represented as rate codes in ensembles of 50–100 neurons, which implies that single neurons perform simple algebra resembling averaging, and that more sophisticated computations arise by virtue of the anatomical convergence of novel combinations of inputs to the cortical column from external sources.
Abstract: Cortical neurons exhibit tremendous variability in the number and temporal distribution of spikes in their discharge patterns. Furthermore, this variability appears to be conserved over large regions of the cerebral cortex, suggesting that it is neither reduced nor expanded from stage to stage within a processing pathway. To investigate the principles underlying such statistical homogeneity, we have analyzed a model of synaptic integration incorporating a highly simplified integrate and fire mechanism with decay. We analyzed a "high-input regime" in which neurons receive hundreds of excitatory synaptic inputs during each interspike interval. To produce a graded response in this regime, the neuron must balance excitation with inhibition. We find that a simple integrate and fire mechanism with balanced excitation and inhibition produces a highly variable interspike interval, consistent with experimental data. Detailed information about the temporal pattern of synaptic inputs cannot be recovered from the pattern of output spikes, and we infer that cortical neurons are unlikely to transmit information in the temporal pattern of spike discharge. Rather, we suggest that quantities are represented as rate codes in ensembles of 50-100 neurons. These column-like ensembles tolerate large fractions of common synaptic input and yet covary only weakly in their spike discharge. We find that an ensemble of 100 neurons provides a reliable estimate of rate in just one interspike interval (10-50 msec). Finally, we derived an expression for the variance of the neural spike count that leads to a stable propagation of signal and noise in networks of neurons, that is, conditions that do not impose an accumulation or diminution of noise.
The solution implies that single neurons perform simple algebra resembling averaging, and that more sophisticated computations arise by virtue of the anatomical convergence of novel combinations of inputs to the cortical column from external sources.
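The balanced integrate-and-fire mechanism can be sketched with a leaky unit driven by near-zero-mean input noise: spiking becomes fluctuation-driven and the interspike intervals come out highly variable, with a coefficient of variation far above the near-zero value a purely drift-driven neuron would show. Parameters are illustrative, and the per-step Gaussian drive stands in for many balanced excitatory and inhibitory synaptic events.

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire unit in the high-input regime: the per-step
# Gaussian drive models many balanced excitatory and inhibitory synaptic
# events (mean near zero, sizable variance).
dt, tau = 0.1, 20.0            # time step and membrane time constant (ms)
v_th, v_reset = 1.0, 0.0       # threshold and reset (arbitrary units)
n_steps = 1_000_000
drive = rng.normal(loc=0.001, scale=0.06, size=n_steps)

v, spike_times = 0.0, []
for t in range(n_steps):
    v += -v * dt / tau + drive[t]   # leaky integration plus balanced input
    if v >= v_th:
        spike_times.append(t * dt)
        v = v_reset

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()   # high for fluctuation-driven firing
```

Because the mean drive keeps the membrane potential well below threshold, spikes are triggered by fluctuations, and the resulting ISI coefficient of variation is of order one, as the abstract argues for cortical neurons.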

2,204 citations

Journal ArticleDOI
20 Apr 2006-Nature
TL;DR: It is shown, in the vertebrate retina, that weak correlations between pairs of neurons coexist with strongly collective behaviour in the responses of ten or more neurons, and it is found that this collective behaviour is described quantitatively by models that capture the observed pairwise correlations but assume no higher-order interactions.
Abstract: Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher-order interactions among large groups of elements have an important role. Here we show, in the vertebrate retina, that weak correlations between pairs of neurons coexist with strongly collective behaviour in the responses of ten or more neurons. We find that this collective behaviour is described quantitatively by models that capture the observed pairwise correlations but assume no higher-order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behaviour. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons.
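A pairwise maximum entropy model of this kind is an Ising model: P(s) ∝ exp(Σᵢ hᵢsᵢ + ½ Σᵢⱼ Jᵢⱼsᵢsⱼ) over ±1 states. For a small network the distribution and its first and second moments can be computed exactly by enumeration; the fields and couplings below are random illustrative values, not fitted to recorded data.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

N = 3
h = rng.normal(0.0, 0.5, N)                   # local fields (biases)
J = np.triu(rng.normal(0.0, 0.3, (N, N)), 1)
J = J + J.T                                   # symmetric couplings, zero diagonal

# Enumerate all 2^N spin states and form the Boltzmann distribution.
states = np.array(list(product([-1, 1], repeat=N)), dtype=float)
energy = -(states @ h) - 0.5 * np.einsum('si,ij,sj->s', states, J, states)
p = np.exp(-energy)
p /= p.sum()

mean_s = p @ states                                   # first moments <s_i>
corr = np.einsum('s,si,sj->ij', p, states, states)    # second moments <s_i s_j>
```

Fitting such a model to data amounts to choosing h and J so that mean_s and corr match the measured firing rates and pairwise correlations; the maximum entropy principle guarantees no higher-order structure is assumed beyond that.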

1,788 citations

Journal ArticleDOI
21 Aug 2008-Nature
TL;DR: The functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells is analysed using a model of multi-neuron spike responses, and a model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlation activity in populations of neurons.
Abstract: Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies, their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.

1,465 citations

Journal ArticleDOI
TL;DR: A statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior.
Abstract: Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance.
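The discrete-time point-process likelihood described can be sketched for one neuron with two covariates, a stimulus term and a one-bin spike-history term: the log conditional intensity is linear in the covariates, and the log likelihood is Σₜ nₜ log λₜ − Σₜ λₜΔ. The generative parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a neuron whose log conditional intensity is linear in two
# covariates: a stimulus and a one-bin spike-history (refractory) term.
T, dt = 5000, 0.001
stim = rng.normal(size=T)
b, w_stim, w_hist = np.log(20.0), 1.0, -2.0   # illustrative parameters

spikes = np.zeros(T)
for t in range(T):
    h_t = spikes[t - 1] if t > 0 else 0.0
    lam = np.exp(b + w_stim * stim[t] + w_hist * h_t)
    spikes[t] = float(rng.random() < lam * dt)  # Bernoulli approximation

def neg_log_lik(theta):
    b_, ws, wh = theta
    hist = np.concatenate([[0.0], spikes[:-1]])
    log_lam = b_ + ws * stim + wh * hist
    # Discrete-time point-process log likelihood (negated).
    return -(spikes @ log_lam - np.sum(np.exp(log_lam) * dt))

nll_true = neg_log_lik((b, w_stim, w_hist))   # generative parameters
nll_null = neg_log_lik((b, 0.0, 0.0))         # intensity-only null model
```

Because the log intensity is linear in the parameters, this negative log likelihood is convex, which is what lets the GLM framework deliver efficient maximum likelihood fitting and principled model comparison (here, the covariate model fits the simulated train better than the constant-rate null).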

982 citations