Open Access · Journal Article · DOI

Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology.

TL;DR
A scalable learning mechanism is developed, and the conditions under which the estimated graph for a network of leaky integrate-and-fire (LIF) neurons matches the true underlying synaptic connections are derived.
Abstract
The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Identifying the network's topology has therefore received a lot of attention in neuroscience and has been the focus of many research initiatives, such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex, and costly. As a result, inverse methods that utilize the firing activity of neurons in order to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activities of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify functional connections from firing activities, the scalability of the proposed techniques poses a major challenge to applying them to large-scale datasets of recorded firing activities. In the exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activity. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, in which we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of leaky integrate-and-fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
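The abstract's starting point is the leaky integrate-and-fire (LIF) neuron: each cell's membrane potential leaks toward rest, integrates weighted presynaptic spikes, and fires when it crosses a threshold. The following is a minimal discrete-time sketch of such a network, with hypothetical parameter names (`tau`, `v_th`, `v_reset`) and a random external drive; it is an illustration of the model family, not the paper's exact formulation:

```python
import numpy as np

def simulate_lif(weights, ext_input, steps, tau=20.0, v_th=1.0,
                 v_reset=0.0, dt=1.0, seed=0):
    """Simulate a network of leaky integrate-and-fire (LIF) neurons.

    weights[i, j] is the synaptic weight from neuron j to neuron i.
    Returns a (steps, n) binary spike matrix.
    """
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    v = np.zeros(n)                           # membrane potentials
    spikes = np.zeros((steps, n), dtype=int)
    for t in range(steps):
        # Synaptic drive from the previous step's spikes.
        drive = weights @ spikes[t - 1] if t > 0 else np.zeros(n)
        # Leaky integration plus random external input.
        v += dt / tau * (-v) + drive + ext_input * rng.random(n)
        fired = v >= v_th                     # threshold crossing -> spike
        spikes[t, fired] = 1
        v[fired] = v_reset                    # reset after firing
    return spikes
```

Given spike matrices like this, the reconstruction task is to recover `weights` (or at least its sign pattern) from `spikes` alone.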



Citations
Posted Content

The lure of causal statements: Rampant mis-inference of causality in estimated connectivity

TL;DR: It is argued that mis-inferences of causality from correlation are augmented by an implicit redefinition of words that suggest mechanisms, such as connectivity, causality, and flow, which tell us little about mechanisms.
Posted Content

The lure of misleading causal statements in functional connectivity research

TL;DR: It is argued that mis-inferences of causality from correlation are augmented by an implicit redefinition of words that suggest mechanisms, such as connectivity, causality, and flow, which tell us little about mechanisms.
Posted Content

Online Neural Connectivity Estimation with Noisy Group Testing

TL;DR: Whereas statistical modeling of observational data relies heavily on parametric assumptions and is purely correlational, this article proposes a method based on noisy group testing to estimate functional connectivity between neurons.
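Group testing recovers a sparse set of "defective" items (here, connections) from pooled measurements. As a rough illustration of the idea — the classic noiseless COMP decoding rule, not the cited paper's noisy estimator — any item that appears in a pool with a negative outcome can be ruled out, and everything else is declared present:

```python
import numpy as np

def comp_decode(tests, outcomes):
    """Decode group tests with the COMP rule.

    tests: (m, n) binary pooling matrix; tests[k, i] = 1 if item i
    is in pool k. outcomes: (m,) binary test results.
    Returns a boolean mask of items declared defective.
    """
    negative_pools = tests[outcomes == 0]
    if negative_pools.size:
        # Any item seen in a negative pool cannot be defective.
        ruled_out = negative_pools.any(axis=0)
    else:
        ruled_out = np.zeros(tests.shape[1], dtype=bool)
    return ~ruled_out
```

With enough well-chosen pools, the number of tests needed grows far more slowly than the number of items, which is the appeal of group-testing formulations for connectivity.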
Proceedings Article · DOI

Reconstructing Neural Network Topology from Firing Activity

TL;DR: A framework to reconstruct the neural network based on firing activity, which combines the basic principle of gradient descent and cross-correlation analysis, is proposed and the results suggest that the algorithm is feasible and effective for neural network reconstruction.
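Cross-correlation analysis, one of the two ingredients mentioned above, scores a directed connection j → i by how strongly neuron i's spikes follow neuron j's at a short positive lag. A minimal sketch (hypothetical function name, not the cited framework's code):

```python
import numpy as np

def cross_correlation_scores(spikes, max_lag=5):
    """Score directed connections j -> i from spike trains.

    spikes: (T, n) binary matrix. Returns an (n, n) matrix where
    score[i, j] is the peak correlation of neuron i's activity with
    neuron j's activity at a positive lag (a crude proxy for j driving i).
    """
    T, n = spikes.shape
    centered = spikes - spikes.mean(axis=0)
    scores = np.zeros((n, n))
    for lag in range(1, max_lag + 1):
        # Correlate activity at time t with presynaptic activity at t - lag.
        c = centered[lag:].T @ centered[:-lag] / (T - lag)
        scores = np.maximum(scores, c)
    return scores
```

Thresholding such a score matrix gives a first-pass estimate of the directed topology, which gradient-based refinement can then improve.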
Posted Content

Online neural connectivity estimation with ensemble stimulation

TL;DR: By stimulating small ensembles of neurons, it is shown that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size under minimal statistical assumptions, and it is proved that the approach can be related to Variational Bayesian inference on the binary connection weights.
References
Journal Article · DOI

Collective dynamics of small-world networks

TL;DR: Simple network models that can be tuned through the middle ground between order and randomness — regular networks "rewired" to introduce increasing amounts of disorder — are explored; these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs.
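The rewiring construction described in this TL;DR (the Watts–Strogatz model) can be sketched in a few lines: start from a ring lattice where each node links to its k nearest neighbours on each side, then rewire each edge to a random new target with probability p. This is an illustrative implementation, with hypothetical function and parameter names:

```python
import numpy as np

def watts_strogatz(n, k, p, seed=0):
    """Build a Watts-Strogatz small-world graph as an adjacency matrix."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=int)
    # Ring lattice: each node connects to its k nearest neighbours per side.
    for i in range(n):
        for j in range(1, k + 1):
            adj[i, (i + j) % n] = adj[(i + j) % n, i] = 1
    # Rewire each original edge with probability p.
    for i in range(n):
        for j in range(1, k + 1):
            old = (i + j) % n
            if adj[i, old] and rng.random() < p:
                # New target: not i itself and not already linked.
                candidates = np.flatnonzero((adj[i] == 0) & (np.arange(n) != i))
                if candidates.size:
                    new = rng.choice(candidates)
                    adj[i, old] = adj[old, i] = 0
                    adj[i, new] = adj[new, i] = 1
    return adj
```

Small p gives a graph that keeps the lattice's high clustering while random shortcuts shrink the characteristic path length — the "small-world" regime.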
Book

Spiking Neuron Models: Single Neurons, Populations, Plasticity

TL;DR: A textbook treatment of spiking neuron models, progressing from single-neuron dynamics (including integrate-and-fire and two-dimensional reductions) to population models and models of synaptic plasticity.
Journal Article · DOI

Sparse Reconstruction by Separable Approximation

TL;DR: This work proposes iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian plus the original sparsity-inducing regularizer, and proves convergence of the proposed iterative algorithm to a minimum of the objective function.
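With a scalar (rather than general diagonal) Hessian approximation, the step described in this TL;DR — a gradient step on the quadratic term followed by the closed-form solution of the separable ℓ1-regularized subproblem — reduces to classic iterative soft-thresholding. A minimal sketch under that simplification (hypothetical function names, not the authors' code):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, iters=500):
    """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by iterative shrinkage.

    Each iteration takes a gradient step on the quadratic term, then
    solves the separable l1 subproblem in closed form via soft-thresholding.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

In the connectivity setting, the sparsity-inducing regularizer encodes the prior that each neuron has relatively few synaptic partners.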
Journal Article · DOI

Large-scale recording of neuronal ensembles

TL;DR: Large-scale recordings from neuronal ensembles now offer the opportunity to test competing theoretical frameworks; exploiting them requires further development of the neuron–electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.
Book Chapter · DOI

Spiking Neuron Models

TL;DR: Book chapter; reference LCN-BOOK-2002-001, URL: http://diwww.epfl.ch/~gerstner/BUCH.html