Open Access Journal Article

Separate neural representations of prediction error valence and surprise: Evidence from an fMRI meta-analysis.

TLDR
A meta-analysis of fMRI studies investigating the neural basis of RPE points to a sequential and distributed encoding of different components of the RPE signal, with potentially distinct functional roles.
Abstract
Learning occurs when an outcome differs from expectations, generating a reward prediction error signal (RPE). The RPE signal has been hypothesized to simultaneously embody the valence of an outcome (better or worse than expected) and its surprise (how far from expectations). Nonetheless, growing evidence suggests that separate representations of the two RPE components exist in the human brain. Meta-analyses provide an opportunity to test this hypothesis and directly probe the extent to which the valence and surprise of the error signal are encoded in separate or overlapping networks. We carried out several meta-analyses on a large set of fMRI studies investigating the neural basis of RPE, time-locked to the decision outcome. We identified two valence learning systems by pooling studies searching for differential neural activity in response to categorical positive-versus-negative outcomes. The first valence network (negative > positive) involved areas regulating alertness and switching behaviours, such as the midcingulate cortex, the thalamus and the dorsolateral prefrontal cortex, whereas the second valence network (positive > negative) encompassed regions of the human reward circuitry, such as the ventral striatum and the ventromedial prefrontal cortex. We also found evidence of a largely distinct surprise-encoding network including the anterior cingulate cortex, anterior insula and dorsal striatum. Together with recent animal and electrophysiological evidence, this meta-analysis points to a sequential and distributed encoding of different components of the RPE signal, with potentially distinct functional roles.
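The decomposition the abstract describes can be made concrete with a minimal Rescorla-Wagner-style sketch (an illustration under standard reinforcement-learning assumptions, not code from the paper): the signed prediction error splits into valence (its sign) and surprise (its magnitude), the two components whose neural correlates the meta-analysis separates.

```python
# Minimal illustration (assumed Rescorla-Wagner form, not from the paper):
# the reward prediction error delta = r - V decomposes into
#   valence  = sign(delta)   -> better (+1) or worse (-1) than expected
#   surprise = |delta|       -> how far the outcome was from expectations

def rpe_components(reward, expected_value):
    delta = reward - expected_value                          # signed RPE
    valence = 1 if delta > 0 else (-1 if delta < 0 else 0)   # outcome valence
    surprise = abs(delta)                                    # unsigned surprise
    return delta, valence, surprise

def update_value(expected_value, reward, alpha=0.1):
    """Learning step driven by the signed RPE (alpha = learning rate)."""
    delta, _, _ = rpe_components(reward, expected_value)
    return expected_value + alpha * delta

# Expecting 0.5: receiving 1.0 and receiving 0.0 are equally surprising
# (|delta| = 0.5) but opposite in valence.
better = rpe_components(1.0, 0.5)   # delta = +0.5, valence = +1, surprise = 0.5
worse = rpe_components(0.0, 0.5)    # delta = -0.5, valence = -1, surprise = 0.5
```

On this view, a region encoding only surprise would respond equally to `better` and `worse`, while a valence network would distinguish them by sign.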



Citations
Journal Article

Deeply Felt Affect: The Emergence of Valence in Deep Active Inference

TL;DR: This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition, and characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action.
Journal Article

A neural network for information seeking

TL;DR: The authors show that a network including the primate anterior cingulate cortex and basal ganglia encodes opportunities to gain information about uncertain rewards and mediates information seeking, demonstrating a cortico-basal ganglia mechanism that motivates actions to resolve uncertainty by seeking knowledge about the future.
Journal Article

Human VMPFC encodes early signatures of confidence in perceptual decisions.

TL;DR: The results suggest that the VMPFC holds an early confidence representation arising from decision dynamics, preceding and potentially informing metacognitive evaluation.
Journal Article

An integrative way for studying neural basis of basic emotions with fMRI

TL;DR: This model argues that basic emotions are not at odds with dimensional accounts of emotion (core affects), and proposes that each basic emotion occupies a position along the dimensional axes of emotion, representing one typical core affect (arousal or valence).
Journal Article

Is there a prediction network? Meta-analytic evidence for a cortical-subcortical network likely subserving prediction.

TL;DR: A widely distributed brain network encompassing regions within the inferior and middle frontal gyri, anterior insula, premotor cortex, pre-supplementary motor area, temporoparietal junction, striatum, thalamus/subthalamus and the cerebellum is revealed and its relevance to motor control, attention, implicit learning and social cognition is discussed.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Journal Article

A Neural Substrate of Prediction and Reward

TL;DR: Findings in this work indicate that primate dopaminergic neurons, whose fluctuating output apparently signals changes or errors in the predictions of future salient and rewarding events, can be understood through quantitative theories of adaptive optimizing control.
Journal Article

FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data

TL;DR: FieldTrip is an open source software package that is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data.
Posted Content

Reinforcement Learning: A Survey

TL;DR: This article surveys reinforcement learning from a computer science perspective, discussing the central issues of RL: trading off exploration and exploitation, establishing the foundations of RL via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.