Open Access Journal Article

Gradual extinction prevents the return of fear: implications for the discovery of state

TLDR
This paper showed that gradually reducing the frequency of aversive stimuli, rather than eliminating them abruptly, prevents the recovery of fear, which has important implications for theories of state discovery in reinforcement learning.
Abstract
Fear memories are notoriously difficult to erase, often recovering over time. The longstanding explanation for this finding is that, in extinction training, a new memory is formed that competes with the old one for expression but does not otherwise modify it. This explanation is at odds with traditional models of learning such as Rescorla-Wagner and reinforcement learning. A possible reconciliation that was recently suggested is that extinction training leads to the inference of a new state that is different from the state that was in effect in the original training. This solution, however, raises a new question: under what conditions are new states, or new memories, formed? Theoretical accounts implicate persistent large prediction errors in this process. As a test of this idea, we reasoned that careful design of the reinforcement schedule during extinction training could reduce these prediction errors enough to prevent the formation of a new memory, while still decreasing reinforcement sufficiently to drive modification of the old fear memory. In two Pavlovian fear-conditioning experiments, we show that gradually reducing the frequency of aversive stimuli, rather than eliminating them abruptly, prevents the recovery of fear. This finding has important implications for theories of state discovery in reinforcement learning.
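The abstract's core computational argument can be illustrated with a Rescorla-Wagner simulation. The sketch below is only a minimal illustration, not the paper's exact schedules, parameters, or model: associative strength V tracks the expected aversive outcome, and persistent large prediction errors are the hypothesized trigger for inferring a new state. To compare schedules at the level of expected error, each trial's outcome is set to the programmed shock probability rather than a sampled shock.

```python
def rescorla_wagner(outcomes, alpha=0.15, v0=0.0):
    """Return per-trial prediction errors delta_t = r_t - V_t
    under the Rescorla-Wagner update V <- V + alpha * delta."""
    v, errors = v0, []
    for r in outcomes:
        delta = r - v          # prediction error on this trial
        errors.append(delta)
        v += alpha * delta     # update associative strength
    return errors

# Hypothetical trial counts and taper schedule, chosen for illustration only.
acquisition = [1.0] * 20                                    # shock every trial
abrupt = [0.0] * 24                                         # shocks stop at once
gradual = [0.75] * 6 + [0.5] * 6 + [0.25] * 6 + [0.0] * 6   # tapered frequency

for name, ext in [("abrupt", abrupt), ("gradual", gradual)]:
    errors = rescorla_wagner(acquisition + ext)
    peak = max(abs(d) for d in errors[20:])   # largest extinction-phase error
    print(f"{name}: peak |prediction error| = {peak:.2f}")
# abrupt: peak |prediction error| = 0.96
# gradual: peak |prediction error| = 0.39
```

Under these illustrative numbers, abruptly removing the shock produces a single large prediction error (roughly the full acquired associative strength), whereas the taper keeps every extinction-phase error well below that, consistent with the idea that gradual extinction can stay under a threshold for new-state formation while still driving the old memory toward zero.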


Citations
Journal Article

Computational psychiatry as a bridge from neuroscience to clinical applications

TL;DR: This work reviews recent advances in data-driven and theory-driven computational psychiatry, with an emphasis on clinical applications, and highlights the utility of combining the two approaches.
Journal Article

Learning task-state representations

TL;DR: This article summarizes recent research into the computational and neural underpinnings of 'representation learning' — how humans (and other animals) construct task representations that allow efficient learning and decision-making.
Journal Article

Behavioral and neurobiological mechanisms of pavlovian and instrumental extinction learning.

TL;DR: This article reviews the behavioral neuroscience of extinction, the process by which a behavior acquired through Pavlovian or instrumental learning decreases in strength when the outcome that reinforced it is removed.
Journal Article

Discovering latent causes in reinforcement learning

TL;DR: The principles of latent causal inference may provide a general theory of structure learning across cognitive domains, and are reviewed with a focus on Pavlovian conditioning.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the field's intellectual foundations to the most recent developments and applications.
Journal Article

A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli.

TL;DR: A new model is proposed to explain cases in which learning fails to occur even though the conditioned stimulus signals the reinforcer; it does so by specifying that certain procedures cause a conditioned stimulus to lose effectiveness.
Journal Article

Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval

TL;DR: It is shown that consolidated fear memories, when reactivated during retrieval, return to a labile state in which infusion of anisomycin shortly after memory reactivation produces amnesia on later tests, regardless of whether reactivation was performed 1 or 14 days after conditioning.
Journal Article

Context and Behavioral Processes in Extinction.

TL;DR: Evidence that extinction does not destroy the original learning, but instead generates new learning that is especially context-dependent is reviewed, consistent with behavioral models that emphasize the role of generalization decrement and expectation violation.
Journal Article

Time, rate, and conditioning.

TL;DR: The authors draw together and develop previous timing models for a broad range of conditioning phenomena to reveal their common conceptual foundation: conditioning depends on learning the temporal intervals between events and the reciprocals of those intervals, the rates of event occurrence.