Journal ArticleDOI

Harnessing behavioral diversity to understand neural computations for cognition.

01 Oct 2019-Current Opinion in Neurobiology (Elsevier Current Trends)-Vol. 58, pp 229-238
TL;DR: It is argued that neural data should be recorded during rich behavioral tasks in order to model cognitive processes, estimate latent behavioral variables, and provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress.
About: This article is published in Current Opinion in Neurobiology. The article was published on 2019-10-01 and is currently open access. It has received 45 citations to date. The article focuses on the topic: Artificial neural network.
Citations
Posted Content
TL;DR: In this paper, the authors describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements, and highlight insights obtained from large-scale neural recordings in diverse model systems.
Abstract: Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. Here, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modelling frameworks to interpret these data, and argue that interpreting brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding at stake. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.

84 citations

Journal ArticleDOI
TL;DR: In this article, the authors describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements, and elaborate on existing modeling frameworks to interpret these data, arguing that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding.
Abstract: Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. In the present review, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modeling frameworks to interpret these data, and argue that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.

64 citations

Journal ArticleDOI
TL;DR: In this paper, the authors discuss four areas in which the relationship between neuroscience and AI has led to major advances: (1) AI models of working memory, (2) AI visual processing, (3) AI analysis of big neuroscience datasets, and (4) computational psychiatry.

36 citations

Journal ArticleDOI
TL;DR: It is proposed that spatial navigation is an excellent area in which these two disciplines can converge to help advance what is known about the brain, and promising lines of research are highlighted in which spatial navigation can serve as the point of intersection between neuroscience and AI.
Abstract: Recent advances in artificial intelligence (AI) and neuroscience are impressive. In AI, this includes the development of computer programs that can beat a grandmaster at Go or outperform human radiologists at cancer detection. Many of these technological developments are directly related to progress in artificial neural networks, initially inspired by our knowledge about how the brain carries out computation. In parallel, neuroscience has also experienced significant advances in understanding the brain. For example, in the field of spatial navigation, knowledge about the mechanisms and brain regions involved in neural computations of cognitive maps (an internal representation of space) was recently recognized with the Nobel Prize in medicine. Much of the recent progress in neuroscience has been due in part to the development of technology used to record from very large populations of neurons in multiple regions of the brain with exquisite temporal and spatial resolution in behaving animals. With the advent of the vast quantities of data that these techniques allow us to collect, there has been increased interest in the intersection between AI and neuroscience; many of these intersections involve using AI as a novel tool to explore and analyze these large data sets. However, given the common initial motivation, to understand the brain, these disciplines could be more strongly linked. Currently, much of this potential synergy is not being realized. We propose that spatial navigation is an excellent area in which these two disciplines can converge to help advance what we know about the brain. In this review, we first summarize progress in the neuroscience of spatial navigation and reinforcement learning. We then turn our attention to discuss how spatial navigation has been modeled using descriptive, mechanistic, and normative approaches and the use of AI in such models. Next, we discuss how AI can advance neuroscience, how neuroscience can advance AI, and the limitations of these approaches. We finally conclude by highlighting promising lines of research in which spatial navigation can be the point of intersection between neuroscience and AI and how this can contribute to the advancement of the understanding of intelligent behavior.

32 citations


Cites methods from "Harnessing behavioral diversity to ..."

  • ...Besides using ML as an analytical tool, there are attempts to go further and use artificial neural networks as a model to understand brain function (Musall et al., 2019; Richards et al., 2019)....

    [...]
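The review above discusses descriptive, mechanistic, and normative models of navigation and their overlap with reinforcement learning. As a concrete illustration of the normative flavor, the sketch below runs value iteration on a small gridworld so that state values form a gradient toward a goal; the grid size, goal location, and discount factor are illustrative assumptions, not parameters from the cited work.

```python
import numpy as np

# Minimal sketch of a "normative" navigation model: value iteration on a
# small gridworld, so that state values form a gradient toward a goal.
# Grid size, goal location, and discount factor are illustrative
# assumptions, not taken from the cited review.
N = 5                                          # grid is N x N
goal = (4, 4)                                  # rewarded location
gamma = 0.95                                   # discount factor
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

V = np.zeros((N, N))                           # state values

def step(state, action):
    """Deterministic move; steps that would leave the grid stay in place."""
    (r, c), (dr, dc) = state, action
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    reward = 1.0 if (nr, nc) == goal else 0.0
    return (nr, nc), reward

# Value iteration: repeatedly back up the best one-step return.
for _ in range(100):
    V_new = np.zeros_like(V)
    for r in range(N):
        for c in range(N):
            if (r, c) == goal:
                continue                       # terminal state keeps value 0
            V_new[r, c] = max(
                reward + gamma * V[nxt]
                for nxt, reward in (step((r, c), a) for a in actions)
            )
    V = V_new

print(np.round(V, 2))   # values rise smoothly toward the goal corner
```

A model-free learner would instead estimate a similar value gradient from experienced transitions, which is where the reinforcement-learning links discussed in the review come in.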

Journal ArticleDOI
TL;DR: This work presents a method for finding the connectivity of networks whose dynamics are specified to solve a task in an interpretable way, and applies it to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold.
Abstract: Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.

28 citations
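The abstract above specifies the desired computation as a drift-diffusion process over a ring-shaped manifold. The sketch below simulates only that target dynamics for the stored variable itself, a remembered angle that diffuses around the ring during a delay; it is not the paper's method for deriving recurrent connectivity, and the noise level and delay duration are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the *target dynamics* described above: a remembered
# angle that lives on a ring and diffuses over a delay period. This only
# illustrates dynamics on a ring-shaped manifold; it is not the paper's
# method for deriving network connectivity, and the noise level and
# delay duration are illustrative assumptions.
rng = np.random.default_rng(0)

dt, T = 1e-3, 2.0            # time step and delay duration (s)
sigma = 0.4                  # diffusion strength (rad / sqrt(s))
theta_cue = np.pi / 3        # cued angle stored at the start of the delay
theta = theta_cue

for _ in range(int(T / dt)):
    theta += sigma * np.sqrt(dt) * rng.standard_normal()  # diffusion step
    theta = np.angle(np.exp(1j * theta))                  # wrap to (-pi, pi]

print(f"recalled angle after {T:.1f} s: {theta:+.2f} rad (cue was {theta_cue:+.2f} rad)")
```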

References
Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

23,074 citations


"Harnessing behavioral diversity to ..." refers background in this paper

  • ...The incorporation of reinforcement learning to flexibly train ANNs might be a way to overcome this limitation and allow ANNs to uncover variable task contingencies on their own [100,103,104]....

    [...]
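The abstract above ties the deep Q-network to temporal-difference reinforcement learning, the class of algorithms whose prediction errors parallel phasic dopamine signals, and the quoted passage notes that such training could let ANNs uncover task contingencies on their own. As a hedged illustration of the underlying update (not the DQN implementation from the paper), the sketch below runs tabular Q-learning on an invented five-state chain task and prints the learned action values.

```python
import numpy as np

# Minimal sketch of the tabular Q-learning / temporal-difference update
# that the deep Q-network scales up with a neural network reading raw
# pixels. The five-state chain task below is an invented toy, not the
# Atari setup from the paper.
rng = np.random.default_rng(0)

n_states, n_actions = 5, 2           # chain of states; actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Move along the chain; reaching the right end pays 1 and ends the episode."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    done = s_next == n_states - 1
    return s_next, (1.0 if done else 0.0), done

for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        # Explore with probability eps, or while the values at s are uninformative.
        if rng.random() < eps or np.ptp(Q[s]) == 0:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # TD error: the quantity often compared to phasic dopamine responses.
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(np.round(Q, 2))  # values for "right" grow toward the rewarded end of the chain
```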

Journal ArticleDOI
25 Jun 2004-Science
TL;DR: Recent findings indicate that network oscillations bias input selection, temporally link neurons into assemblies, and facilitate synaptic plasticity, mechanisms that cooperatively support temporal representation and long-term consolidation of information.
Abstract: Clocks tick, bridges and skyscrapers vibrate, neuronal networks oscillate. Are neuronal oscillations an inevitable by-product, similar to bridge vibrations, or an essential part of the brain’s design? Mammalian cortical neurons form behavior-dependent oscillating networks of various sizes, which span five orders of magnitude in frequency. These oscillations are phylogenetically preserved, suggesting that they are functionally relevant. Recent findings indicate that network oscillations bias input selection, temporally link neurons into assemblies, and facilitate synaptic plasticity, mechanisms that cooperatively support temporal representation and long-term consolidation of information.

5,512 citations

Journal ArticleDOI
TL;DR: This work focuses on simple decisions that can be studied in the laboratory but emphasizes general principles likely to extend to other settings, including deliberation and commitment.
Abstract: The study of decision making spans such varied fields as neuroscience, psychology, economics, statistics, political science, and computer science. Despite this diversity of applications, most decisions share common elements including deliberation and commitment. Here we evaluate recent progress in understanding how these basic elements of decision formation are implemented in the brain. We focus on simple decisions that can be studied in the laboratory but emphasize general principles likely to extend to other settings.

3,298 citations


"Harnessing behavioral diversity to ..." refers background in this paper

  • ...Accumulated evidence, bias, value, confidence: Both (biological: [22]; ANN: [21])...

    [...]

  • ...For instance, in models of evidence accumulation, reaction times provide an estimate of the time for a decision variable to reach a bound [22,36] and post-choice waiting times provide an estimate of decision confidence [37]....

    [...]
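The evidence-accumulation point quoted above, that reaction time reflects the time for a decision variable to reach a bound, can be illustrated with a minimal drift-diffusion simulation; the drift rate, bound height, and noise level below are illustrative assumptions, not fitted parameters.

```python
import numpy as np

# Minimal drift-diffusion sketch of the point quoted above: reaction time
# is modeled as the time for an accumulating decision variable to reach a
# bound. Drift rate, bound height, and noise level are illustrative
# assumptions, not fitted parameters.
rng = np.random.default_rng(0)

dt = 1e-3      # time step (s)
drift = 1.0    # mean evidence per second (positive favors "right")
sigma = 1.0    # accumulation noise
bound = 1.0    # +bound -> choose "right", -bound -> choose "left"

def one_trial():
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0                  # reaction time and choice

rts, choices = zip(*(one_trial() for _ in range(500)))
print(f"mean RT: {np.mean(rts):.3f} s, proportion 'right' choices: {np.mean(choices):.2f}")
```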

Journal ArticleDOI
01 Jan 1952

2,959 citations

Book
01 Oct 1998

2,830 citations

Trending Questions (1)
What are artificial neural networks?

Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data.