Open Access Journal ArticleDOI

The Importance of Falsification in Computational Cognitive Modeling.

TL;DR
It is argued here that the simulation of candidate models is necessary to falsify models and therefore support the specific claims about cognitive function made by the vast majority of model-based studies.
About
This article is published in Trends in Cognitive Sciences. The article was published on 2017-06-01 and is currently open access. It has received 293 citations to date. The article focuses on the topic: Cognitive model.


Citations
Journal ArticleDOI

Ten simple rules for the computational modeling of behavioral data.

TL;DR: Ten simple rules are offered to ensure that computational modeling is used with care and yields meaningful insights. The rules apply to the simplest modeling techniques, which are most accessible to beginning modelers, and most also apply to more advanced techniques.
Journal ArticleDOI

Generalization guides human exploration in vast decision spaces

TL;DR: Modelling how humans search for rewards under limited search horizons yields evidence that Gaussian process function learning, combined with an optimistic upper confidence bound sampling strategy, provides a robust account of how people use generalization to guide search.
Journal ArticleDOI

Lack of theory building and testing impedes progress in the factor and network literature

TL;DR: The applied social science literature using factor and network models continues to grow rapidly, yet most work reads like an exercise in model fitting and falls short of theory building and testing in social science.
Posted Content

Contextual modulation of value signals in reward and punishment learning

TL;DR: It is demonstrated, using computational modelling and fMRI in humans, that learning option values on a relative (context-dependent) scale offers a simple computational solution for avoidance learning.
Journal ArticleDOI

Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing.

TL;DR: It appears that people preferentially take into account information that confirms their current choice, relative to information that disconfirms it, a valence-induced bias observed in the context of both factual and counterfactual learning.
References
Journal ArticleDOI

Technical Note: Q-Learning

TL;DR: This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989), showing that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely.
Journal ArticleDOI

The free-energy principle: a unified brain theory?

TL;DR: This Review looks at some key brain theories in the biological and physical sciences from the free-energy perspective, suggesting that several global brain theories might be unified within a free-energy framework.
Journal Article

The Logic of Scientific Discovery

Karl Popper
- 01 Jan 1959 - 
TL;DR: Popper's classic statement of falsificationism: a theory is scientific only insofar as it makes predictions that could, in principle, be refuted by observation, and no amount of confirming evidence can verify it.
Frequently Asked Questions (9)
Q1. What are the contributions mentioned in the paper "Computational cognitive neuroscience: model fitting should not replace model simulation" ?

Here the authors argue that the analysis of model simulations is often necessary to support the specific claims about behavioral function that most model-based studies make. The authors defend this argument both informally, by providing a large-scale (N > 300) review of recent studies, and formally, by showing how model simulations are necessary to interpret model comparison results. Finally, the authors propose guidelines for future work, which combine model comparison and simulation.

4. Fit the competing computational models to the data in order to obtain, for each model, an estimate of the best-fitting model parameters and an approximation of the model evidence, which trades off quality of fit against model complexity. 

Relative model comparison criteria (i.e. various approximations of the model evidence, such as BIC and AIC) are not appropriate to falsify models because they do not capture certain features of the fitted data: 1) they focus on the evidence in favor of the best model, instead of the evidence against the rival model, and 2) they are blind to the capacity of the tested models to reproduce (or not) any particular phenomenon of interest. 
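The fit-versus-complexity trade-off can be made concrete with a minimal sketch. The model (a one-parameter biased coin), the simulated data, and all parameter values below are illustrative assumptions, not taken from the paper: fit the parameter by maximum likelihood, then compute AIC and BIC from the resulting negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=200)   # simulated binary choices

def nll(params, data):
    """Negative log-likelihood of a one-parameter biased-coin model."""
    p = np.clip(params[0], 1e-6, 1 - 1e-6)
    return -np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

res = minimize(nll, x0=[0.5], args=(data,), bounds=[(1e-6, 1 - 1e-6)])
k, n = 1, len(data)                     # number of free parameters, sample size
aic = 2 * k + 2 * res.fun               # AIC = 2k - 2 log L
bic = k * np.log(n) + 2 * res.fun       # BIC = k log(n) - 2 log L
print(f"best-fit p = {res.x[0]:.3f}, AIC = {aic:.1f}, BIC = {bic:.1f}")
```

Both criteria reward likelihood and penalize free parameters; note that neither says anything about whether the model reproduces a qualitative phenomenon of interest, which is exactly the limitation discussed above.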

The learning curves represent data simulated from this task with a standard RL algorithm (in grey; the “No modulation” case) and a model that uses a higher learning rate in the volatile phase than in the stable phase (in blue; the “Modulation” case). 
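The two cases can be sketched with a simple delta-rule learner. The phase lengths, reward probabilities, and learning rates below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# reward probability: a stable phase (trials 0-99, p = 0.8), then a
# volatile phase in which p reverses every 25 trials
p_reward = np.concatenate([np.full(100, 0.8),
                           np.tile([0.2] * 25 + [0.8] * 25, 2)[:100]])

def simulate(alpha_stable, alpha_volatile):
    """Delta-rule tracking of reward probability with phase-specific rates."""
    v, trace = 0.5, []
    for t, p in enumerate(p_reward):
        alpha = alpha_stable if t < 100 else alpha_volatile
        r = rng.binomial(1, p)
        v += alpha * (r - v)            # delta-rule update toward the outcome
        trace.append(v)
    return np.array(trace)

no_modulation = simulate(0.1, 0.1)      # grey curve: single learning rate
modulation = simulate(0.1, 0.4)         # blue curve: higher rate when volatile
```

The higher learning rate in the volatile phase makes the “Modulation” learner track the reversals faster, at the cost of noisier estimates.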

In the natural sciences, it has been proposed that the epistemological specificity of a computational modeling approach, compared to a model-free one, is that the latter directly investigates the natural phenomenon of interest, whereas the former builds an artificial representation of the natural system (the model) and studies its behavior22. 

Simulate ex ante the two (or more) competing computational theories across a large range of parameters (sometimes called a ‘parameter recovery’ procedure) in order to ensure that the task allows the discrimination of the two models (i.e. that their predictions diverge under a key experimental manipulation). 
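One ingredient of such ex ante checks, parameter recovery, can be sketched as follows. The learner, the noise model, and all parameter values are illustrative assumptions: simulate behaviour at known learning rates, refit the parameter to each simulated dataset, and check that the true value is recovered.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
rewards = rng.binomial(1, 0.7, size=300)    # one fixed reward sequence

def value_trace(alpha):
    """Delta-rule value estimates produced by a learner with rate alpha."""
    v, trace = 0.5, []
    for r in rewards:
        v += alpha * (r - v)
        trace.append(v)
    return np.array(trace)

def recover(alpha_true, noise=0.05):
    """Simulate noisy behaviour at alpha_true, then refit alpha by least squares."""
    observed = value_trace(alpha_true) + rng.normal(0, noise, len(rewards))
    sse = lambda a: np.sum((value_trace(a) - observed) ** 2)
    return minimize_scalar(sse, bounds=(0.01, 0.99), method="bounded").x

for alpha_true in (0.1, 0.3, 0.6):
    print(f"true alpha = {alpha_true}, recovered = {recover(alpha_true):.2f}")
```

If recovered parameters do not track the generating ones in a design, parameter-dependent model predictions from that design cannot be trusted.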

In short, this precept dictates that amongst “equally good” explanations of the data, the less complex should be held as more likely to be true. 

In cognitive neuroscience, computational models can also be used simply as tools to quantify different features of behavioral or neural activity. 

Such a procedure (which can be called “model recovery”) would consist in simulating two datasets with two different models and verifying (for a given set of models and task specification) which relative model comparison criterion avoids both over- and under-fitting17.
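A toy version of this model-recovery check, using two deliberately simple hypothetical models (a single choice probability for the whole session vs. one probability per session half) compared by BIC; the models, sample sizes, and probabilities are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def loglik(data, p):
    """Bernoulli log-likelihood of binary data under probability p."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

def bic_const(data):
    """Model A: one choice probability for the whole session (k = 1)."""
    return 1 * np.log(len(data)) - 2 * loglik(data, data.mean())

def bic_split(data):
    """Model B: a separate probability per half of the session (k = 2)."""
    h = len(data) // 2
    ll = loglik(data[:h], data[:h].mean()) + loglik(data[h:], data[h:].mean())
    return 2 * np.log(len(data)) - 2 * ll

n = 400
data_A = rng.binomial(1, 0.6, n)                         # generated by model A
data_B = np.concatenate([rng.binomial(1, 0.3, n // 2),
                         rng.binomial(1, 0.8, n // 2)])  # generated by model B

for name, data in (("A", data_A), ("B", data_B)):
    winner = "A" if bic_const(data) < bic_split(data) else "B"
    print(f"data generated by model {name}: BIC prefers model {winner}")
```

A criterion that recovers the generating model on both datasets neither over-fits (it does not pick the flexible model on model-A data) nor under-fits (it does not pick the simple model on model-B data).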