Journal ArticleDOI

MR. Estimator, a toolbox to determine intrinsic timescales from subsampled spiking activity.

29 Apr 2021, PLOS ONE (Public Library of Science), Vol. 16, Iss. 4
TL;DR: The Python toolbox “MR. Estimator” is presented to reliably estimate the intrinsic timescale from electrophysiological recordings of heavily subsampled systems; this timescale has been used to investigate a functional hierarchy across the primate cortex and quantifies a system’s dynamic working point.
Abstract: Here we present our Python toolbox "MR. Estimator" to reliably estimate the intrinsic timescale from electrophysiological recordings of heavily subsampled systems. Originally intended for the analysis of time series from neuronal spiking activity, our toolbox is applicable to a wide range of systems where subsampling (the difficulty of observing the whole system in full detail) limits our capability to record. Applications range from epidemic spreading to any system that can be represented by an autoregressive process. In the context of neuroscience, the intrinsic timescale can be thought of as the duration over which any perturbation reverberates within the network; it has been used as a key observable to investigate a functional hierarchy across the primate cortex and serves as a measure of working memory. It is also a proxy for the distance to criticality and quantifies a system's dynamic working point.
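To make the estimation idea concrete, the following is a minimal, self-contained sketch of the multistep-regression (MR) approach underlying the toolbox, applied to a synthetic, heavily subsampled autoregressive process. It is an illustration only, not the toolbox's own API; the process parameters, lag range, and fit choices are arbitrary assumptions.

```python
# Minimal sketch of the multistep-regression (MR) idea: estimate the slopes r_k
# of A_{t+k} vs A_t for several lags k, then fit r_k = b * m^k to recover m and
# the intrinsic timescale tau = -dt / ln(m). Subsampling biases the amplitude b,
# but not m, so tau can still be recovered. Illustration only, not the toolbox API.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
dt = 1.0                        # width of a time bin (arbitrary units here)
m_true, length = 0.95, 100_000

# Surrogate activity: AR(1) / branching-like process with Poisson drive
drive = rng.poisson(5.0, size=length)
A = np.zeros(length)
for t in range(1, length):
    A[t] = m_true * A[t - 1] + drive[t]

# Heavy subsampling: each unit of activity is observed only with probability 0.05
A_sub = rng.binomial(A.astype(int), 0.05)

# Multistep regression: slope r_k of A_{t+k} against A_t for lags k = 1..k_max
k_max = 40
lags = np.arange(1, k_max + 1)
r_k = np.array([np.polyfit(A_sub[:-k], A_sub[k:], 1)[0] for k in lags])

# Exponential fit r_k = b * m^k, then tau = -dt / ln(m)
(b_fit, m_fit), _ = curve_fit(lambda k, b, m: b * m**k, lags, r_k, p0=(1.0, 0.9))
tau_est = -dt / np.log(m_fit)
print(f"m_true = {m_true}, m_est = {m_fit:.3f}, tau_est = {tau_est:.1f} bins")
```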


Citations
Journal ArticleDOI
TL;DR: It is proposed that rules that are capable of bringing the network to criticality can be classified by how long the near-critical dynamics persists after their disabling, and the role of self-organization and criticality in computation is discussed.
Abstract: Self-organized criticality has been proposed to be a universal mechanism for the emergence of scale-free dynamics in many complex systems, and possibly in the brain. While such scale-free patterns were identified experimentally in many different types of neural recordings, the biological principles behind their emergence remained unknown. Utilizing different network models and motivated by experimental observations, synaptic plasticity was proposed as a possible mechanism to self-organize brain dynamics towards a critical point. In this review, we discuss how various biologically plausible plasticity rules operating across multiple timescales are implemented in the models and how they alter the network's dynamical state through modification of the number and strength of the connections between the neurons. Some of these rules help to stabilize criticality, while others need additional mechanisms to prevent divergence from the critical state. We propose that rules that are capable of bringing the network to criticality can be classified by how long the near-critical dynamics persists after their disabling. Finally, we discuss the role of self-organization and criticality in computation. Overall, the concept of criticality helps to shed light on brain function and self-organization, yet the dynamics of living neural networks seem to harness not only criticality for computation, but also deviations from it.
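As a purely conceptual illustration of how a plasticity-like rule can steer a network's dynamical state, the toy sketch below (not any specific published rule; the drive, target rate, and learning rate are arbitrary assumptions) homeostatically adjusts the branching parameter m of a driven branching process, which then settles close to, but below, the critical value m = 1.

```python
# Toy illustration (not a specific published rule): a homeostatic-style update
# nudges the branching parameter m of a driven branching process so that the
# time-averaged activity approaches a target; near the target, m settles close
# to the critical value m = 1, with an offset set by the external drive.
import numpy as np

rng = np.random.default_rng(1)
steps, h, target = 50_000, 1.0, 100.0   # external drive h, target activity
m, eta = 0.5, 1e-4                      # initial branching parameter, learning rate
A = target

m_trace = np.empty(steps)
for t in range(steps):
    A = rng.poisson(m * A + h)          # stochastic branching update
    m += eta * (target - A) / target    # homeostatic adjustment of m
    m = min(max(m, 0.0), 0.9999)        # keep m in the subcritical range
    m_trace[t] = m

# For a Poisson-driven branching process, the stationary activity is h / (1 - m),
# so reaching the target requires m -> 1 - h/target = 0.99 here.
print(f"final m ≈ {m_trace[-5000:].mean():.3f} (expected ≈ {1 - h / target:.3f})")
```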

53 citations


Cites background from "MR. Estimator, a toolbox to determi..."

  • ...We are slowly learning how to deal with strong subsampling (under-observation) of the brain network [81, 122, 126, 150, 172, 181]....


Journal ArticleDOI
TL;DR: Criticality is defined as the singular state of complex systems poised at the brink of a phase transition between order and randomness, as distinct from chaos, the property of a process whose trajectory in phase space is sensitive to small differences in initial conditions.

46 citations

Journal ArticleDOI
TL;DR: In this article, the authors analyzed single-unit spike recordings from both the epileptogenic (focal) and the non-focal cortical hemispheres of 20 epilepsy patients and quantified the distance to instability in the framework of criticality.
Abstract: Epileptic seizures are characterized by abnormal and excessive neural activity, where cortical network dynamics seem to become unstable. However, most of the time, during seizure-free periods, cortex of epilepsy patients shows perfectly stable dynamics. This raises the question of how recurring instability can arise in the light of this stable default state. In this work, we examine two potential scenarios of seizure generation: (i) epileptic cortical areas might generally operate closer to instability, which would make epilepsy patients generally more susceptible to seizures, or (ii) epileptic cortical areas might drift systematically towards instability before seizure onset. We analyzed single-unit spike recordings from both the epileptogenic (focal) and the non-focal cortical hemispheres of 20 epilepsy patients. We quantified the distance to instability in the framework of criticality, using a novel estimator, which enables an unbiased inference from a small set of recorded neurons. Surprisingly, we found no evidence for either scenario: Neither did focal areas generally operate closer to instability, nor were seizures preceded by a drift towards instability. In fact, our results from both pre-seizure and seizure-free intervals suggest that despite epilepsy, human cortex operates in the stable, slightly subcritical regime, just like the cortex of other healthy mammals.
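A sketch of the two comparisons described in the abstract, applied to hypothetical per-window branching-ratio estimates (the synthetic numbers, window structure, and statistical tests below are illustrative assumptions, not the study's actual pipeline or results):

```python
# Sketch of the two questions above, using hypothetical per-window estimates
# m_hat of the branching parameter (e.g. obtained with an MR-type estimator):
# (i) do focal areas operate closer to instability than non-focal ones, and
# (ii) is there a drift of m_hat towards 1 before seizure onset?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical estimates, one value per analysis window
m_focal = rng.normal(0.98, 0.005, size=40)       # epileptogenic hemisphere
m_nonfocal = rng.normal(0.98, 0.005, size=40)    # contralateral hemisphere
m_preseizure = rng.normal(0.98, 0.005, size=30)  # windows ordered in time

# (i) Compare the distance to instability (1 - m_hat) between hemispheres
u, p_between = stats.mannwhitneyu(1 - m_focal, 1 - m_nonfocal)

# (ii) Test for a systematic drift towards instability before seizure onset
trend = stats.linregress(np.arange(len(m_preseizure)), m_preseizure)

print(f"focal vs non-focal: p = {p_between:.2f}")
print(f"pre-seizure drift slope = {trend.slope:+.2e} (p = {trend.pvalue:.2f})")
```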

14 citations

Journal ArticleDOI
TL;DR: A novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking, which captures a footprint of information processing that is beyond time-lagged measures of temporal dependence.
Abstract: Information processing can leave distinct footprints on the statistics of neural spiking. For example, efficient coding minimizes the statistical dependencies on the spiking history, while temporal integration of information may require the maintenance of information over different timescales. To investigate these footprints, we developed a novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking. This measure captures how much past information is necessary to predict current spiking. In contrast, classical time-lagged measures of temporal dependence like the autocorrelation capture how long (potentially redundant) past information can still be read out. Strikingly, we find for model neurons that our method disentangles the strength and timescale of history dependence, whereas the two are mixed in classical approaches. When applying the method to experimental data, which are necessarily of limited size, a reliable estimation of mutual information is only possible for a coarse temporal binning of past spiking, a so-called past embedding. To still account for the vastly different spiking statistics and potentially long history dependence of living neurons, we developed an embedding-optimization approach that varies not only the number and size of past bins, but also an exponential stretching of the bins. For extracellular spike recordings, we found that the strength and timescale of history dependence indeed can vary independently across experimental preparations. While hippocampus indicated strong and long history dependence, in visual cortex it was weak and short, and in vitro it was strong but short. This work enables an information-theoretic characterization of history dependence in recorded spike trains, which captures a footprint of information processing that is beyond time-lagged measures of temporal dependence. To facilitate the application of the method, we provide practical guidelines and a toolbox.
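To make the embedding idea concrete, the sketch below builds a past embedding with exponentially stretched bins for a toy spike train and computes a naive plug-in estimate of the history dependence. The bin number, stretching factor, and plug-in estimator are illustrative assumptions; the published approach additionally optimizes these embedding parameters and uses more careful estimation.

```python
# Sketch of an exponentially stretched past embedding: represent the spiking
# history before each time step with d bins whose widths grow by a factor
# kappa, then estimate the (plug-in) mutual information between that embedding
# and current spiking. The plug-in estimate is biased for small data sets.
import numpy as np
from collections import Counter

def past_embedding(spikes, d=5, tau1=1, kappa=1.6):
    """Binary past embedding with exponentially growing bin widths."""
    widths = np.ceil(tau1 * kappa ** np.arange(d)).astype(int)
    edges = np.concatenate(([0], np.cumsum(widths)))   # offsets into the past
    depth = edges[-1]
    rows = []
    for t in range(depth, len(spikes)):
        past = spikes[t - depth:t][::-1]               # most recent bin first
        rows.append(tuple(int(past[a:b].any()) for a, b in zip(edges[:-1], edges[1:])))
    return rows, spikes[depth:]

def plugin_mi(embeddings, current):
    """Plug-in mutual information I(past; current) in bits."""
    n = len(current)
    p_joint = Counter(zip(embeddings, current))
    p_past, p_cur = Counter(embeddings), Counter(current)
    return sum((c / n) * np.log2((c / n) / ((p_past[e] / n) * (p_cur[s] / n)))
               for (e, s), c in p_joint.items())

# Usage with a toy binary spike train (1 = at least one spike in a time bin)
rng = np.random.default_rng(3)
spikes = (rng.random(20_000) < 0.05).astype(int)
emb, cur = past_embedding(spikes)
print(f"plug-in history dependence: {plugin_mi(emb, cur):.4f} bit")
```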

9 citations

Journal ArticleDOI
TL;DR: In this paper, the authors give an overview of some issues arising from spatial subsampling, review approaches developed in recent years to tackle the subsampling problem, and outline what they believe are the main open challenges.
Abstract: Despite the development of large-scale data-acquisition techniques, experimental observations of complex systems are often limited to a tiny fraction of the system under study. This spatial subsampling is particularly severe in neuroscience, in which only a tiny fraction of millions or even billions of neurons can be individually recorded. Spatial subsampling may lead to substantial systematic biases when inferring the collective properties of the entire system naively from a subsampled part. To overcome such biases, powerful mathematical tools have been developed. In this Perspective, we give an overview of some issues arising from subsampling and review approaches developed in recent years to tackle the subsampling problem. These approaches enable one to correctly assess phenomena such as graph structures, collective dynamics of animals, neural network activity or the spread of disease from observing only a tiny fraction of the system. However, existing approaches are still far from having solved the subsampling problem in general, and we also outline what we believe are the main open challenges. Solving these challenges alongside the development of large-scale recording techniques will enable further fundamental insights into the workings of complex and living systems.

4 citations

References
Journal ArticleDOI
TL;DR: In this article, the authors show how NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.
Abstract: In the Python world, NumPy arrays are the standard representation for numerical data and enable efficient implementation of numerical computations in a high-level language. As this effort shows, NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.
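The three techniques named in the abstract, shown in miniature (the arrays and operations below are generic examples, not taken from the paper):

```python
# The three techniques in miniature (arrays and sizes are arbitrary examples).
import numpy as np

x = np.random.default_rng(4).random(100_000)

# 1. Vectorize: replace an explicit Python loop by a single array expression
slow = np.array([v * 2.0 + 1.0 for v in x])   # interpreted element-wise loop
fast = x * 2.0 + 1.0                          # one vectorized expression
assert np.allclose(slow, fast)

# 2. Avoid copies: slices are views, and in-place operators reuse memory
view = x[::2]      # shares memory with x, nothing is copied
x *= 0.5           # in-place; y = x * 0.5 would allocate a new array

# 3. Minimize operation counts: write results into preallocated buffers
buf = np.empty_like(x)
np.multiply(x, 3.0, out=buf)   # one pass, no temporary array
np.add(buf, 1.0, out=buf)      # reuses buf instead of building intermediates
```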

9,149 citations

Journal ArticleDOI
16 Sep 2020, Nature
TL;DR: In this paper, the authors review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data, and discuss NumPy's evolution into a flexible interoperability layer between increasingly specialized computational libraries.
Abstract: Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis. NumPy is the primary array programming library for Python; here its fundamental concepts are reviewed and its evolution into a flexible interoperability layer between increasingly specialized computational libraries is discussed.
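A brief, generic illustration of the fundamental array concepts the review refers to (slicing, boolean indexing, broadcasting, and vectorized reductions); the examples below are not taken from the paper:

```python
# A few fundamental array-programming concepts (generic examples).
import numpy as np

a = np.arange(12).reshape(3, 4)        # n-dimensional array with shape (3, 4)

# Slicing and boolean indexing select data without explicit loops
col = a[:, 1]                          # second column, a view into a
evens = a[a % 2 == 0]                  # boolean mask -> 1-D array of even entries

# Broadcasting: arrays of compatible shapes combine without tiling memory
row_means = a.mean(axis=1, keepdims=True)   # shape (3, 1)
centered = a - row_means                    # (3, 4) - (3, 1) broadcasts over columns

# Vectorized reductions express whole computations as single expressions
print(centered.sum(axis=1))            # ~0 for every row after centering
```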

7,624 citations

Book
01 Jan 1987
TL;DR: Covers the delta method and the influence function; cross-validation, jackknife and bootstrap; balanced repeated replications (half-sampling); random subsampling; and nonparametric confidence intervals.
Abstract: Contents: The Jackknife Estimate of Bias; The Jackknife Estimate of Variance; Bias of the Jackknife Variance Estimate; The Bootstrap; The Infinitesimal Jackknife; The Delta Method and the Influence Function; Cross-Validation, Jackknife and Bootstrap; Balanced Repeated Replications (Half-Sampling); Random Subsampling; Nonparametric Confidence Intervals.
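As a compact worked example of the first items in the contents, the sketch below computes the jackknife estimates of bias and variance for a plug-in statistic (the biased sample variance); the data are arbitrary illustrative numbers.

```python
# Jackknife estimates of bias and variance for a plug-in statistic, here the
# (biased) sample variance. Data are arbitrary illustrative numbers.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 2.0, size=50)
n = len(x)

theta = np.var(x)                        # plug-in statistic (biased by factor (n-1)/n)
theta_i = np.array([np.var(np.delete(x, i)) for i in range(n)])  # leave-one-out replicates
theta_bar = theta_i.mean()

bias_jack = (n - 1) * (theta_bar - theta)                        # jackknife estimate of bias
var_jack = (n - 1) / n * np.sum((theta_i - theta_bar) ** 2)      # jackknife estimate of variance

print(f"statistic {theta:.3f}, bias-corrected {theta - bias_jack:.3f}, SE {np.sqrt(var_jack):.3f}")
```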

7,007 citations

Journal ArticleDOI
TL;DR: As this effort shows, NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.
Abstract: In the Python world, NumPy arrays are the standard representation for numerical data. Here, we show how these arrays enable efficient implementation of numerical computations in a high-level language. Overall, three techniques are applied to improve performance: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts. We first present the NumPy array structure, then show how to use it for efficient computation, and finally how to share array data with other libraries.

5,307 citations


"MR. Estimator, a toolbox to determi..." refers background or methods in this paper

  • ...The trial structure is incorporated in a two-dimensional NumPy array [20, 21], where the first index (i) labels the trial....


  • ...Prepare data: After the toolbox is loaded, the input data needs to be in the right format: a 2D NumPy array [20, 21].... (see the sketch below)

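The excerpts above describe the toolbox's expected input format. Below is a minimal sketch of arranging several trials of binned activity into such a two-dimensional array; the trial data here are hypothetical placeholders.

```python
# Minimal sketch of the input format described in the excerpts above: a 2-D
# NumPy array whose first index labels the trial and whose second index runs
# over time bins. The trial data below are hypothetical placeholders.
import numpy as np

num_trials, num_bins = 10, 2000
rng = np.random.default_rng(6)

# e.g. binned population spike counts, one 1-D array per trial
trials = [rng.poisson(3.0, size=num_bins) for _ in range(num_trials)]

data = np.stack(trials)     # shape (num_trials, num_bins); data[i] is trial i
assert data.shape == (num_trials, num_bins)
```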