
Showing papers by "John M. Beggs published in 2022"


Book DOI
30 Aug 2022
TL;DR: John Beggs offers an introduction to the critical point and its relevance to the brain, and considers future directions for the field, including research on homeostatic regulation, quasicriticality, and the expansion of the cortex and intelligence.
Abstract: How the cerebral cortex operates near a critical phase transition point for optimum performance. Individual neurons have limited computational powers, but when they work together, it is almost like magic. Firing synchronously and then breaking off to improvise by themselves, they can be paradoxically both independent and interdependent. This happens near the critical point, where neurons are poised between a phase in which activity is damped and a phase in which it is amplified; there, information processing is optimized and complex emergent activity patterns arise. The claim that neurons in the cortex work best when they operate near the critical point is known as the criticality hypothesis. In this book John Beggs—one of the pioneers of this hypothesis—offers an introduction to the critical point and its relevance to the brain. Drawing on recent experimental evidence, Beggs first explains the main ideas underlying the criticality hypothesis and emergent phenomena. He then discusses the critical point and its two main consequences—first, scale-free properties that confer optimum information processing; and second, universality, or the idea that complex emergent phenomena, like those seen near the critical point, can be explained by relatively simple models that are applicable across species and scale. Finally, Beggs considers future directions for the field, including research on homeostatic regulation, quasicriticality, and the expansion of the cortex and intelligence. An appendix provides technical material; many chapters include exercises that use freely available code and data sets.
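The damped and amplified phases described above can be sketched with a toy branching process (an illustrative model, not code from the book): each active unit triggers a Poisson-distributed number of descendants with mean sigma, the branching ratio. Below sigma = 1 avalanches of activity quickly die out, above it they blow up, and sigma = 1 is the critical point between the two phases.

```python
import numpy as np

def avalanche_size(sigma, rng, cap=100_000):
    """Total activations triggered by one seed unit (capped so
    supercritical runs terminate)."""
    active, total = 1, 1
    while active > 0 and total < cap:
        # Each active unit spawns k descendants, k ~ Poisson(sigma).
        active = rng.poisson(sigma, size=active).sum()
        total += active
    return total

rng = np.random.default_rng(0)

# Subcritical (damped): avalanches stay small, mean ~ 1 / (1 - sigma) = 2.
mean_sub = np.mean([avalanche_size(0.5, rng) for _ in range(2000)])

# Critical: avalanche sizes become heavy-tailed (scale-free).
sizes_crit = [avalanche_size(1.0, rng) for _ in range(2000)]

# Supercritical (amplified): most avalanches run away to the cap.
sizes_super = [avalanche_size(1.5, rng) for _ in range(200)]

print(f"mean subcritical size ~ {mean_sub:.2f} (theory: 2)")
print(f"largest critical avalanche: {max(sizes_crit)}")
```

At sigma = 1 a few avalanches grow orders of magnitude larger than the typical one, the scale-free behavior the criticality hypothesis associates with optimal information processing.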

13 citations


Journal Article DOI
01 Jul 2022, Entropy
TL;DR: This review provides an accessible introduction to the partial information decomposition (PID) framework, which reveals the redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. It focuses particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source.
Abstract: The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been recognized as a natural framework for understanding information processing, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, and discuss recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity.
Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
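The synergistic mode described above is easiest to see in the XOR gate, a standard textbook illustration (not an example taken from this review): the output Y = X1 XOR X2 shares zero information with either input alone, yet is fully determined by the two together, so the entire bit of I(X1, X2; Y) is synergy under any partial information decomposition.

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_info(samples):
    """I(A;B) in bits from a list of (a, b) pairs, by plug-in estimation."""
    n = len(samples)
    pab = Counter(samples)                     # joint distribution
    pa = Counter(a for a, _ in samples)        # marginal of A
    pb = Counter(b for _, b in samples)        # marginal of B
    return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# Uniform binary inputs, deterministic XOR output.
table = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]

i1  = mutual_info([(x1, y)       for x1, x2, y in table])  # 0 bits
i2  = mutual_info([(x2, y)       for x1, x2, y in table])  # 0 bits
i12 = mutual_info([((x1, x2), y) for x1, x2, y in table])  # 1 bit

print(f"I(X1;Y)={i1:.1f}  I(X2;Y)={i2:.1f}  I(X1,X2;Y)={i12:.1f}")
```

Neither input predicts the output at all on its own, yet the pair predicts it perfectly: the 1 bit of joint information is "higher-order" in exactly the sense the review describes.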

4 citations


Journal Article DOI
Abstract: The hypothesis that living neural networks operate near a critical phase transition point has received substantial discussion. This “criticality hypothesis” is potentially important because experiments and theory show that optimal information processing and health are associated with operating near the critical point. Despite the promise of this idea, there have been several objections to it. While earlier objections have already been addressed, the more recent critiques of Touboul and Destexhe have not yet been fully met. The purpose of this paper is to describe their objections and offer responses. Their first objection is that the well-known Brunel model for cortical networks does not display a peak in mutual information near its phase transition, in apparent contradiction to the criticality hypothesis. In response I show that it does have such a peak near the phase transition point, provided it is not strongly driven by random inputs. Their second objection is that even simple models like a coin flip can satisfy multiple criteria of criticality. This suggests that the emergent criticality claimed to exist in cortical networks is just the consequence of a random walk put through a threshold. In response I show that while such processes can produce many signatures of criticality, these signatures (1) do not emerge from collective interactions, (2) do not support information processing, and (3) do not have long-range temporal correlations. Because experiments show these three features are consistently present in living neural networks, such random walk models are inadequate. Nevertheless, I conclude that these objections have been valuable for refining research questions and should always be welcomed as a part of the scientific process.
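The coin-flip objection can be sketched in a few lines (a minimal paraphrase of the thresholded-random-walk null model, not the critics' actual analysis): threshold a simple ±1 random walk and call each suprathreshold excursion an "avalanche". The excursion sizes look broadly distributed, yet nothing in the model interacts collectively, which is exactly the point of the objection and of the three-part response above.

```python
import numpy as np

# A plain coin-flip random walk: no network, no collective interactions.
rng = np.random.default_rng(42)
walk = np.cumsum(rng.choice((-1, 1), size=200_000))

# An "avalanche" here is a maximal run where |walk| stays above the
# threshold (zero); its size is the summed excursion amplitude.
sizes, current = [], 0
for w in np.abs(walk):
    if w > 0:
        current += w
    elif current:
        sizes.append(current)
        current = 0
if current:
    sizes.append(current)
sizes = np.array(sizes)

print(f"{len(sizes)} avalanches; median {np.median(sizes):.0f}, max {sizes.max()}")
```

A log-log histogram of these sizes has an apparently scale-free tail, mimicking one signature of criticality. But by the three criteria listed in the abstract (collective interactions, information processing, long-range temporal correlations in the signatures) such a model falls short of what is observed in living neural networks.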

2 citations


Journal Article DOI
TL;DR: Analyzing the Cambridge Centre for Ageing and Neuroscience’s resting-state magnetoencephalography dataset in light of the quasicriticality framework suggests a link between changes to brain connectivity due to aging and increased vulnerability to distraction from irrelevant information.
Abstract: Aging impacts the brain's structural and functional organization and over time leads to various disorders, such as Alzheimer's disease and cognitive impairment. The process also impacts sensory function, bringing about a general slowing in various perceptual and cognitive functions. Here, we analyze the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) resting-state magnetoencephalography (MEG) dataset—the largest aging cohort available—in light of the quasicriticality framework, a novel organizing principle for brain functionality which relates information processing and scaling properties of brain activity to brain connectivity and stimulus. Examination of the data using this framework reveals interesting correlations with age and gender of test subjects. Using simulated data as verification, our results suggest a link between changes to brain connectivity due to aging and increased dynamical fluctuations of neuronal firing rates. Our findings suggest a platform to develop biomarkers of neurological health.