
Showing papers by "John M. Beggs" published in 2011


Journal ArticleDOI
15 Nov 2011-PLOS ONE
TL;DR: Extending transfer entropy to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons, and these extensions to TE could soon become practical tools for experimentalists who record hundreds of spiking neurons.
Abstract: Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross-correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE could soon become practical tools for experimentalists who record hundreds of spiking neurons.
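The delay-and-message-length extension is small enough to sketch. Below is a minimal plug-in (histogram) estimator of TE between binned spike trains, with a scan over candidate delays in the spirit of the extended TE described above; the function names, the one-bin target history, and the default delay range are illustrative assumptions, not the paper's released toolbox.

```python
import numpy as np

def _entropy_bits(*cols):
    """Plug-in joint entropy (bits) of equal-length integer-coded columns."""
    joint = np.stack(cols, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    prob = counts / counts.sum()
    return float(-np.sum(prob * np.log2(prob)))

def transfer_entropy(x, y, delay=1, msg_len=1):
    """TE (bits) from binned spike train x to y at one candidate delay.

    x, y: equal-length 0/1 arrays (e.g., 1 ms bins). delay is the assumed
    source-to-target latency in bins; msg_len is the number of consecutive
    source bins treated as one message. Plug-in estimator with a one-bin
    target history; an illustrative sketch, not the released toolbox.
    """
    x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
    t = np.arange(delay + msg_len, len(y))       # target bins with full context
    y_now, y_past = y[t], y[t - 1]
    x_word = np.zeros(len(t), dtype=np.int64)    # encode source message as int
    for i in range(msg_len):
        x_word = x_word * 2 + x[t - delay - i]
    # TE = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X-message)
    return (_entropy_bits(y_now, y_past) - _entropy_bits(y_past)
            - _entropy_bits(y_now, y_past, x_word)
            + _entropy_bits(y_past, x_word))

def extended_te(x, y, max_delay=16, msg_len=1):
    """Scan candidate delays and keep the peak TE, mirroring the paper's
    strategy for unknown synaptic latencies (the range is an assumption)."""
    return max(transfer_entropy(x, y, d, msg_len)
               for d in range(1, max_delay + 1))
```

Scanning delays and keeping the maximum is what lets the estimator absorb the one-to-tens-of-milliseconds spread of synaptic latencies that defeats a single-delay measure.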

211 citations


Journal ArticleDOI
30 Sep 2011-Chaos
TL;DR: This work utilizes the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information, and proposes a new set of filters that more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Abstract: Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
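The decomposition itself is compact enough to sketch. The following implements the two-source partial information decomposition of Williams and Beer, with redundancy measured as the expected minimum specific information (I_min) over the sources; the function name and the p[x1, x2, y] array layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def pid_two_sources(p):
    """Two-source partial information decomposition (Williams & Beer).

    p[x1, x2, y] is a joint probability table over discrete states.
    Redundancy is I_min, the expected minimum specific information over
    the two sources. Returns (redundancy, unique1, unique2, synergy) in
    bits. A sketch for illustration, not the authors' code.
    """
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    py = p.sum(axis=(0, 1))        # p(y)
    px1y = p.sum(axis=1)           # p(x1, y)
    px2y = p.sum(axis=0)           # p(x2, y)

    def specific_info(pxy):
        # I(Y = y; X) for each y: sum_x p(x|y) * log2[p(y|x) / p(y)]
        px = pxy.sum(axis=1)
        out = np.zeros(len(py))
        for yi in np.nonzero(py)[0]:
            for xi in np.nonzero(px)[0]:
                if pxy[xi, yi] > 0:
                    out[yi] += (pxy[xi, yi] / py[yi]) * np.log2(
                        (pxy[xi, yi] / px[xi]) / py[yi])
        return out

    def mi(pxy):
        # mutual information I(X; Y) in bits from a 2-D joint table
        px = pxy.sum(axis=1, keepdims=True)
        pyr = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * pyr)[nz])))

    # redundancy: expectation over y of the weaker source's specific info
    red = float(np.sum(py * np.minimum(specific_info(px1y),
                                       specific_info(px2y))))
    i1, i2 = mi(px1y), mi(px2y)
    ij = mi(p.reshape(-1, p.shape[2]))   # I(Y; X1, X2), sources collapsed
    return red, i1 - red, i2 - red, ij - i1 - i2 + red
```

The filters proposed in the paper apply pointwise (local) versions of terms like these across space and time in the automaton, which is what lets domains, particles, and collisions separate more cleanly than under earlier measures.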

36 citations


Posted Content
TL;DR: The information theory behind each information measure is reviewed, and the differences between these measures are examined by applying them to several simple model systems.
Abstract: Information theory is widely accepted as a powerful tool for analyzing complex systems and it has been applied in many disciplines. Recently, some central components of information theory - multivariate information measures - have found expanded use in the study of several phenomena. These information measures differ in subtle yet significant ways. Here, we will review the information theory behind each measure, as well as examine the differences between these measures by applying them to several simple model systems. In addition to these systems, we will illustrate the usefulness of the information measures by analyzing neural spiking data from a dissociated culture through early stages of its development. We hope that this work will aid other researchers as they seek the best multivariate information measure for their specific research goals and system. Finally, we have made software available online which allows the user to calculate all of the information measures discussed within this paper.
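To make the comparison concrete, here is a hedged sketch computing three commonly compared multivariate measures (total correlation, dual total correlation, and co-information) from a discrete joint probability table; the names and conventions are illustrative, and this is not the software released with the paper.

```python
import numpy as np
from itertools import combinations

def H(p, keep):
    """Entropy (bits) of the marginal of joint table p over the axes in keep."""
    drop = tuple(i for i in range(p.ndim) if i not in keep)
    m = p.sum(axis=drop) if drop else p
    m = m[m > 0]
    return float(-np.sum(m * np.log2(m)))

def multivariate_measures(p):
    """Total correlation, dual total correlation, and co-information (bits)
    for a joint probability table p over n discrete variables. A sketch of
    the standard definitions, not the paper's released software."""
    n = p.ndim
    axes = list(range(n))
    h_all = H(p, axes)
    # total correlation: sum of marginal entropies minus the joint entropy
    tc = sum(H(p, [i]) for i in axes) - h_all
    # dual total correlation: joint entropy minus the summed conditional
    # entropies H(X_i | all other variables)
    h_rest = [H(p, [j for j in axes if j != i]) for i in axes]
    dtc = h_all - sum(h_all - hr for hr in h_rest)
    # co-information by inclusion-exclusion over all nonempty subsets
    co = sum((-1) ** (len(s) + 1) * H(p, list(s))
             for k in range(1, n + 1) for s in combinations(axes, k))
    return tc, dtc, co

# Example: two uniform bits and their XOR are pairwise independent but
# jointly dependent: total correlation 1 bit, dual total correlation 2 bits,
# co-information -1 bit (a signature of synergy).
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a ^ b] = 0.25
print(multivariate_measures(p))
```

The XOR example illustrates why the measures differ in subtle ways: each quantity assigns a different number to the same joint distribution because each formalizes "shared information" differently.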

21 citations


Journal ArticleDOI
TL;DR: The results suggest that VPA could improve stress tolerance by changing the expression of synapse-related proteins, and indicate a significant interaction between drug treatment and LH pretreatment in most subregions of the hippocampus.

8 citations


Posted Content
28 Nov 2011
TL;DR: A recently introduced measure, the partial information decomposition, is found to provide the most complete description of the interactions present in the logic gates under examination; logic gates with higher levels of synergy are found to require more time for a back-propagation network to learn.
Abstract: Information theory is widely accepted as a powerful tool for analyzing complex systems and it has been applied in many disciplines. Recently, some central components of information theory, multivariate information measures, have found expanded use in the study of several phenomena. Despite this widespread use, there is disagreement regarding the interpretation and use of these information measures. Due to the broad use of multivariate information measures, this problem prevents progress in many areas of study. Here, we seek to bring clarity to the situation by comparing the results from many proposed multivariate information measures for a simple system: Boolean logic gates. These logic gates represent the building blocks of computation and are well known across many disciplines. We find that a recently introduced measure, the partial information decomposition, provides the most complete description of the interactions present in the logic gates under examination. In addition, we apply the multivariate information measures to a dynamic system: a back-propagation network designed to learn the logic gates. Using the partial information decomposition, we find that logic gates which possess higher levels of synergy require more time for a back-propagation network to learn. Conversely, we find that logic gates which possess higher levels of redundancy require less time for a back-propagation network to learn. This relationship was obscured when using the previously proposed information measures. Finally, we have made software available online which allows the user to calculate all of the information measures discussed within this paper, as well as software that can be used to create the back-propagation networks discussed herein.
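The synergy/redundancy contrast among the gates is easy to reproduce. The sketch below computes single-source and joint mutual information for XOR and AND under uniform inputs; the interaction-information term I(Y;X1,X2) - I(Y;X1) - I(Y;X2) comes out at +1 bit for XOR (pure synergy) but only +0.189 bit for AND, a single net number that conflates what the partial information decomposition separates (under the Williams-Beer I_min measure, AND carries 0.311 bit of redundancy and 0.5 bit of synergy). Function names here are illustrative assumptions.

```python
import numpy as np

def mi(pxy):
    """Mutual information (bits) from a 2-D joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

def gate_profile(fn):
    """I(Y;X1), I(Y;X2), I(Y;X1,X2) for a gate with uniform binary inputs."""
    p = np.zeros((2, 2, 2))
    for a in (0, 1):
        for b in (0, 1):
            p[a, b, fn(a, b)] += 0.25
    return mi(p.sum(axis=1)), mi(p.sum(axis=0)), mi(p.reshape(4, 2))

for name, fn in [("XOR", lambda a, b: a ^ b),
                 ("AND", lambda a, b: a & b)]:
    i1, i2, ij = gate_profile(fn)
    # positive interaction information signals net synergy; the partial
    # information decomposition splits it into separate synergy and
    # redundancy terms (see the PID sketch in the entry above)
    print(f"{name}: I1={i1:.3f}  I2={i2:.3f}  Ijoint={ij:.3f}  "
          f"net synergy={ij - i1 - i2:.3f}")
```

This is the pattern the abstract links to learning time: the fully synergistic XOR gate is the slowest for a back-propagation network to learn, while gates whose information is carried redundantly by the inputs are learned faster.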

3 citations