Open Access Journal Article (DOI)

Learning to represent reward structure: a key to adapting to complex environments.

TLDR
This work proposes a new hypothesis, the dopamine reward structural learning hypothesis, in which dopamine activity encodes multiplex signals for learning in order to represent reward structure in the internal state, leading to better reward prediction.
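For readers approaching this from the reinforcement-learning side, the "reward prediction" mentioned above is conventionally formalized as a temporal-difference (TD) value estimate, with the TD error playing the role classically ascribed to phasic dopamine. The Python sketch below shows a generic TD(0) update as an illustration of that standard framing only, not the model proposed in the article; the toy chain task and parameter values are invented.

import numpy as np

# Generic TD(0) value learning; delta is the reward prediction error that the
# dopamine literature classically identifies with phasic dopamine activity.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta                 # move the value estimate toward the TD target
    return delta

# Toy usage: a three-state chain with reward delivered only on the last transition.
V = np.zeros(3)
for _ in range(50):
    td0_update(V, s=0, r=0.0, s_next=1)
    td0_update(V, s=1, r=1.0, s_next=2)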
About
This article is published in Neuroscience Research. The article was published on 2012-12-01 and is currently open access. It has received 30 citations to date. The article focuses on the topic of reinforcement learning.


Citations
Journal Article (DOI)

Prefrontal cortex as a meta-reinforcement learning system

TL;DR: A new theory is presented showing how learning to learn may arise from interactions between prefrontal cortex and the dopamine system, providing a fresh foundation for future research.
Journal Article (DOI)

Internally generated sequences in learning and executing goal-directed behavior

TL;DR: Using computational modeling, it is proposed that internally generated sequences may be productively considered a component of goal-directed decision systems, implementing a sampling-based inference engine that optimizes goal acquisition at multiple timescales of on-line choice, action control, and learning.
Journal Article (DOI)

Prediction error associated with the perceptual segmentation of naturalistic events

TL;DR: At points of unpredictability, midbrain and striatal regions associated with the phasic release of the neurotransmitter dopamine transiently increased in activity, which could provide a global updating signal, cuing other brain systems that a significant new event has begun.
Journal Article (DOI)

Reward feedback accelerates motor learning

TL;DR: The use of reward feedback is a promising approach to either supplement or substitute for sensory feedback in the development of improved neurorehabilitation techniques, and it points to an important role for reward in the motor learning process.
Journal Article (DOI)

Rethinking dopamine as generalized prediction error.

TL;DR: A new theory of dopamine function is developed that embraces a broader conceptualization of prediction errors and indicates that by signalling errors in both sensory and reward predictions, dopamine supports a form of RL that lies between model-based and model-free algorithms.
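One concrete way to combine errors in sensory (state-feature) predictions with reward prediction errors is a successor-representation-style TD update, sketched below. This is an illustration of the general idea under assumed one-hot state features and tabular learning, not necessarily the exact formulation developed in the paper.

import numpy as np

# Successor-representation (SR) sketch: M[s] predicts discounted future state
# occupancies (a "sensory" prediction), while w maps state features to reward.
# Errors on M are vector-valued feature prediction errors; the error on w is an
# ordinary scalar reward prediction error. Values are recovered as M[s] @ w.
def sr_update(M, w, s, r, s_next, alpha=0.1, gamma=0.95):
    n = M.shape[0]
    phi = np.eye(n)[s]                        # one-hot feature for the current state
    delta_M = phi + gamma * M[s_next] - M[s]  # feature (sensory) prediction error
    M[s] += alpha * delta_M
    delta_r = r - w @ phi                     # reward prediction error at state s
    w += alpha * delta_r * phi                # in-place update of the reward weights
    return delta_M, delta_r

# Toy usage on a five-state problem.
M = np.eye(5)      # initial occupancy predictions
w = np.zeros(5)    # learned reward weights
sr_update(M, w, s=0, r=1.0, s_next=1)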
References

Adaptive Critics and the Basal Ganglia

TL;DR: One consequence of the embedded-agent view is increasing interest in the learning paradigm called reinforcement learning (RL).
Journal Article (DOI)

Midbrain dopamine neurons encode decisions for future action

TL;DR: It is concluded that immediate decisions are likely to be generated elsewhere and conveyed to the dopamine neurons, which play a role in shaping long-term decision policy through dynamic modulation of the efficacy of basal ganglia synapses.
Proceedings Article (DOI)

Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction

TL;DR: Results are presented from using Horde on a multi-sensor mobile robot to successfully learn goal-oriented behaviors and long-term predictions from off-policy experience.
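Horde's core abstraction is the general value function (GVF): a prediction defined by its own cumulant (pseudo-reward), termination/discount signal, and target policy, with many such predictions learned in parallel and off-policy from a single stream of behavior. Horde itself learns GVFs with gradient-TD methods over feature vectors; the tabular, one-step importance-sampling sketch below is only meant to convey the structure, and the example question at the end is invented.

import numpy as np

# Simplified, tabular sketch of a Horde-style general value function (GVF).
# Each GVF carries its own cumulant, pseudo-discount, and target policy and is
# updated off-policy from the shared behavior stream via importance sampling.
class TabularGVF:
    def __init__(self, n_states, cumulant, discount, target_policy, alpha=0.1):
        self.v = np.zeros(n_states)
        self.cumulant = cumulant            # cumulant(s, a, s_next) -> pseudo-reward
        self.discount = discount            # discount(s_next) -> value in [0, 1]
        self.target_policy = target_policy  # target_policy(a, s) -> probability
        self.alpha = alpha

    def update(self, s, a, s_next, behavior_prob):
        rho = self.target_policy(a, s) / behavior_prob   # importance-sampling ratio
        z = self.cumulant(s, a, s_next)
        g = self.discount(s_next)
        delta = z + g * self.v[s_next] - self.v[s]
        self.v[s] += self.alpha * rho * delta

# Hypothetical GVF question: "how much will the light sensor read if the robot
# keeps driving forward?" All names and values here are invented for illustration.
light_gvf = TabularGVF(
    n_states=10,
    cumulant=lambda s, a, s_next: 0.0,                      # placeholder sensor reading
    discount=lambda s_next: 0.9,                            # fixed prediction horizon
    target_policy=lambda a, s: 1.0 if a == "forward" else 0.0,
)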
Journal Article

Neural mechanisms for foraging

TL;DR: It is demonstrated that humans can alternate between two modes of choice, comparative decision-making and foraging, which depend on distinct neural mechanisms in ventromedial prefrontal cortex (vmPFC) and anterior cingulate cortex (ACC) that use distinct reference frames.
Journal Article (DOI)

Dopamine: generalization and bonuses

TL;DR: This paper interprets an additional role for dopamine, the computational role of guiding exploration, in terms of dopamine's mechanistic attentional and psychomotor effects.
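The "bonuses" of the title refer to adding exploration-related bonuses (for example, for novelty) to the reward term of the TD error, so that a dopamine-like teaching signal also pushes the agent toward under-explored states. Below is a minimal sketch with an invented count-based bonus, rather than the specific bonuses analyzed in the paper.

import numpy as np

# TD(0) update with a novelty bonus folded into the reward: rarely visited
# states look transiently better, which biases behavior toward exploring them.
# The count-based bonus and the beta scale are illustrative choices only.
def td_with_novelty_bonus(V, counts, s, r, s_next, alpha=0.1, gamma=0.95, beta=0.5):
    counts[s_next] += 1
    bonus = beta / np.sqrt(counts[s_next])          # decays as the state becomes familiar
    delta = (r + bonus) + gamma * V[s_next] - V[s]  # bonus enters like extra reward
    V[s] += alpha * delta
    return delta

# Toy usage on a four-state problem.
V, counts = np.zeros(4), np.zeros(4)
td_with_novelty_bonus(V, counts, s=0, r=0.0, s_next=2)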