Author

Giulio Tononi

Bio: Giulio Tononi is an academic researcher at the University of Wisconsin-Madison. He has contributed to research topics including non-rapid eye movement sleep and sleep in non-human animals, has an h-index of 114, and has co-authored 511 publications receiving 58,519 citations. Previous affiliations of Giulio Tononi include the University of Pisa and the University of Nebraska Medical Center.


Papers
Journal ArticleDOI
TL;DR: A research strategy to compile the connection matrix of the human brain (the human “connectome”) is proposed, and its potential impact is discussed.
Abstract: The connection matrix of the human brain (the human “connectome”) represents an indispensable foundation for basic and applied neurobiological research. However, the network of anatomical connections linking the neuronal elements of the human brain is still largely unknown. While some databases or collations of large-scale anatomical connection patterns exist for other mammalian species, there is currently no connection matrix of the human brain, nor is there a coordinated research effort to collect, archive, and disseminate this important information. We propose a research strategy to achieve this goal, and discuss its potential impact.

2,908 citations

Journal ArticleDOI
TL;DR: This paper reviews a novel hypothesis about the functions of slow wave sleep, the synaptic homeostasis hypothesis, which accounts for a large number of experimental facts, makes several specific predictions, and has implications for both sleep and mood disorders.

1,864 citations

Journal ArticleDOI
01 Jul 2004-Nature
TL;DR: It is shown that sleep homeostasis indeed has a local component, which can be triggered by a learning task involving specific brain regions, and that the local increase in SWA after learning correlates with improved performance of the task after sleep.
Abstract: Human sleep is a global state whose functions remain unclear. During much of sleep, cortical neurons undergo slow oscillations in membrane potential, which appear in electroencephalograms as slow wave activity (SWA) of <4 Hz [1]. The amount of SWA is homeostatically regulated, increasing after wakefulness and returning to baseline during sleep [2]. It has been suggested that SWA homeostasis may reflect synaptic changes underlying a cellular need for sleep [3]. If this were so, inducing local synaptic changes should induce local SWA changes, and these should benefit neural function. Here we show that sleep homeostasis indeed has a local component, which can be triggered by a learning task involving specific brain regions. Furthermore, we show that the local increase in SWA after learning correlates with improved performance of the task after sleep. Thus, sleep homeostasis can be induced on a local level and can benefit performance.

1,658 citations

Journal ArticleDOI
08 Jan 2014-Neuron
TL;DR: This Perspective considers the rationale and evidence for the synaptic homeostasis hypothesis (SHY), and points to open issues related to sleep and plasticity.

1,565 citations

Journal ArticleDOI
TL;DR: A measure called neural complexity (CN) is introduced that captures the interplay between functional segregation and functional integration in the brains of higher vertebrates; it may also prove useful for analyzing complexity in other biological domains such as gene regulation and embryogenesis.
Abstract: In brains of higher vertebrates, the functional segregation of local areas that differ in their anatomy and physiology contrasts sharply with their global integration during perception and behavior. In this paper, we introduce a measure, called neural complexity (CN), that captures the interplay between these two fundamental aspects of brain organization. We express functional segregation within a neural system in terms of the relative statistical independence of small subsets of the system and functional integration in terms of significant deviations from independence of large subsets. CN is then obtained from estimates of the average deviation from statistical independence for subsets of increasing size. CN is shown to be high when functional segregation coexists with integration and to be low when the components of a system are either completely independent (segregated) or completely dependent (integrated). We apply this complexity measure in computer simulations of cortical areas to examine how some basic principles of neuroanatomical organization constrain brain dynamics. We show that the connectivity patterns of the cerebral cortex, such as a high density of connections, strong local connectivity organizing cells into neuronal groups, patchiness in the connectivity among neuronal groups, and prevalent reciprocal connections, are associated with high values of CN. The approach outlined here may prove useful in analyzing complexity in other biological domains such as gene regulation and embryogenesis.
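The subset-averaging construction described in the abstract can be sketched for Gaussian systems, where all entropies follow from covariance determinants. This is an illustrative sketch under that Gaussian assumption, not the paper's code; the function names are mine, and CN here is computed from integration I(X) (multi-information) averaged over subsets of each size.

```python
import itertools
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a multivariate Gaussian with covariance cov."""
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

def integration(cov, idx):
    """I(X) = sum of marginal entropies minus joint entropy, for the subset idx."""
    sub = cov[np.ix_(idx, idx)]
    marginal = sum(gaussian_entropy(sub[i:i + 1, i:i + 1]) for i in range(len(idx)))
    return marginal - gaussian_entropy(sub)

def neural_complexity(cov):
    """C_N(X) = sum_k [(k/n) I(X) - <I(X_k)>], averaging over all subsets of size k."""
    n = cov.shape[0]
    total_i = integration(cov, list(range(n)))
    cn = 0.0
    for k in range(1, n + 1):
        avg_i = np.mean([integration(cov, list(s))
                         for s in itertools.combinations(range(n), k)])
        cn += (k / n) * total_i - avg_i
    return cn
```

With an identity covariance (fully independent components) CN is zero, while uniform positive correlations make it positive, consistent with the measure vanishing in the fully segregated limit.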

1,504 citations


Cited by
Book
01 Jan 1998
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
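The dynamic-programming solution methods mentioned in Part II can be sketched with value iteration on a tiny Markov decision process. The 5-state chain, its rewards, and the function name below are my own illustrative assumptions, not an example from the book.

```python
# Hypothetical chain MDP: states 0..4, state 4 terminal.
# Moving right from state 3 into state 4 yields reward 1; all other moves yield 0.
import numpy as np

def value_iteration(n_states=5, gamma=0.9, tol=1e-8):
    """Sweep Bellman optimality backups until the value function stops changing."""
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states - 1):              # terminal state keeps V = 0
            left = 0.0 + gamma * V[max(s - 1, 0)]  # moving left never pays
            reward = 1.0 if s + 1 == n_states - 1 else 0.0
            right = reward + gamma * V[s + 1]
            new_v = max(left, right)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V
```

The converged values decay geometrically with distance from the goal (V[3] = 1, V[2] = 0.9, V[0] = 0.729), illustrating how discounting shapes the optimal value function.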

37,989 citations

28 Jul 2005
TL;DR: PfEMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. Each haploid genome encodes about 60 members of the var gene family; switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal ArticleDOI
TL;DR: Developments in this field are reviewed, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
Abstract: Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
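Two of the concepts this review covers, degree distributions and clustering, can be computed directly from an adjacency matrix. A minimal sketch, assuming a symmetric 0/1 matrix for an undirected graph; the function names are mine, and the clustering measure shown is the global transitivity (closed triples over all connected triples).

```python
import numpy as np

def degree_sequence(A):
    """Node degrees of an undirected graph given its 0/1 adjacency matrix."""
    return A.sum(axis=1)

def global_clustering(A):
    """Fraction of connected triples that close into triangles (transitivity)."""
    A = np.asarray(A, dtype=float)
    paths2 = A @ A
    triples = paths2.sum() - np.trace(paths2)  # 2-paths through a middle node
    triangles_x6 = np.trace(A @ A @ A)         # each triangle is counted 6 times
    return triangles_x6 / triples if triples else 0.0
```

A triangle graph gives clustering 1.0 and a 3-node path gives 0.0, the two extremes between which empirical networks fall.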

17,647 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book surveys the core topics of machine learning, from probability distributions and linear models for regression and classification through neural networks, kernel methods, and graphical models to approximate inference, sampling methods, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: This article reviews studies investigating complex brain networks in diverse experimental modalities, provides an accessible introduction to the basic principles of graph theory, and highlights the technical challenges and key questions to be addressed by future developments in this rapidly moving field.
Abstract: Recent developments in the quantitative analysis of complex networks, based largely on graph theory, have been rapidly translated to studies of brain network organization. The brain's structural and functional systems have features of complex networks--such as small-world topology, highly connected hubs and modularity--both at the whole-brain scale of human neuroimaging and at a cellular scale in non-human animals. In this article, we review studies investigating complex brain networks in diverse experimental modalities (including structural and functional MRI, diffusion tensor imaging, magnetoencephalography and electroencephalography in humans) and provide an accessible introduction to the basic principles of graph theory. We also highlight some of the technical challenges and key questions to be addressed by future developments in this rapidly moving field.
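The other ingredient of small-world topology, short characteristic path length, is also a basic graph-theoretic quantity. A minimal sketch assuming an unweighted adjacency-list representation; the function names are mine, not from the review, and distances are averaged over reachable pairs only.

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """Breadth-first-search distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def characteristic_path_length(adj):
    """Average shortest-path distance over all ordered, connected node pairs."""
    total, pairs = 0, 0
    for s in adj:
        for t, d in shortest_path_lengths(adj, s).items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs if pairs else 0.0
```

On a 4-node ring the average distance is 4/3; adding a few shortcut edges to a large ring lattice is exactly what drives this number down in small-world models.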

9,700 citations