Open Access · Journal Article · DOI

Robust Standards in Cognitive Science

TL;DR
In this paper, a fine-grained view of Open Science practices in cognitive modelling is presented, and the feasibility and usefulness of preregistration and lab notebooks for each of the proposed categories are discussed.
Abstract
Recent discussions within the mathematical psychology community have focused on how Open Science practices may apply to cognitive modelling. Lee et al. (2019) sketched an initial approach for adapting Open Science practices that have been developed for experimental psychology research to the unique needs of cognitive modelling. While we welcome the general proposal of Lee et al. (2019), we believe a more fine-grained view is necessary to accommodate the adoption of Open Science practices in the diverse areas of cognitive modelling. Firstly, we suggest a categorization for the diverse types of cognitive modelling, which we argue will allow researchers to more clearly adapt Open Science practices to different types of cognitive modelling. Secondly, we consider the feasibility and usefulness of preregistration and lab notebooks for each of these categories and address potential objections to preregistration in cognitive modelling. Finally, we separate several cognitive modelling concepts that we believe Lee et al. (2019) conflated, which should allow for greater consistency and transparency in the modelling process. At a general level, we propose a framework that emphasizes local consistency in approaches while allowing for global diversity in modelling practices.


Citations
Journal Article · DOI

Why Hypothesis Testers Should Spend Less Time Testing Hypotheses.

TL;DR: Discusses how shifting the focus to nonconfirmatory research can tie together many loose ends of psychology’s reform movement and help us to develop the strong, testable theories that Paul Meehl urged.
Journal Article · DOI

An R package for an integrated evaluation of statistical approaches to cancer incidence projection

TL;DR: An R package allowing a straightforward comparison of cancer incidence rate projection approaches is developed, supporting in particular Bayesian models fitted by Integrated Nested Laplace Approximations (INLA); its use is demonstrated through an extensive empirical evaluation of the operating characteristics of potentially applicable models differing in complexity.
Journal Article · DOI

Another's pain in my brain: No evidence that placebo analgesia affects the sensory-discriminative component in empathy for pain

TL;DR: A more rigorous test aiming to overcome limitations of previous work finds no causal evidence for the engagement of somatosensory sharing in empathy, while replicating previous studies showing overlapping brain activity in the affective-motivational component for first-hand pain and empathy for pain.
Posted Content · DOI

Preregistration in diverse contexts: a preregistration template for the application of cognitive models.

TL;DR: Open science practices have become increasingly popular in psychology and related sciences, as discussed by the authors; these practices aim to increase rigour and transparency in science.
Journal Article · DOI

Think fast! The implications of emphasizing urgency in decision-making.

TL;DR: In this article, the authors provide a more conclusive answer regarding the implications of emphasizing urgent responding, re-analysing 6 data sets from previous studies with two different evidence accumulation models (EAMs), the diffusion model and the linear ballistic accumulator (LBA), using state-of-the-art methods for model-selection-based inference.
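As a rough sketch of the two EAMs involved (added for orientation; the notation is generic and not taken from the article): the diffusion model treats evidence as a noisy process drifting between two response bounds, while the LBA has independent accumulators racing linearly to a threshold,
dX(t) = v\,dt + s\,dW(t), \quad \text{respond when } X(t) \text{ reaches } 0 \text{ or } a,
T_i = t_0 + \frac{b - k_i}{d_i}, \quad k_i \sim U(0, A), \; d_i \sim N(v_i, s^2), \quad \text{response} = \arg\min_i T_i,
where v is the drift rate, a the boundary separation, b the LBA threshold, A the start-point range, and t_0 the non-decision time.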
References
Journal Article · DOI

A new look at the statistical model identification

TL;DR: In this article, a new estimate, the minimum information theoretic criterion estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of conventional hypothesis testing procedures.
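For orientation (a standard statement added here, not a quotation from the paper): MAICE selects the model minimizing Akaike's information criterion,
\mathrm{AIC} = -2 \ln \hat{L} + 2k,
where \hat{L} is the maximized likelihood of a candidate model and k is the number of its estimated parameters.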
Journal Article · DOI

Estimating the Dimension of a Model

TL;DR: In this paper, the problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion.
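For reference (a standard statement added here, not a quotation from the paper), the criterion that results from this asymptotic expansion is what is now known as the Bayesian information criterion,
\mathrm{BIC} = -2 \ln \hat{L} + k \ln n,
where \hat{L} is the maximized likelihood, k the number of parameters, and n the sample size; the model with the smallest BIC is preferred.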

Journal Article · DOI

Bayesian measures of model complexity and fit

TL;DR: In this paper, the authors consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined, and derive a measure, pD, of the effective number of parameters in a model as the difference between the posterior mean of the deviance and the deviance at the posterior means of the parameters of interest; this measure is related to other information criteria and has an approximate decision-theoretic justification.
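In symbols (the standard presentation of DIC, added for clarity rather than quoted from the paper): writing D(\theta) = -2 \ln p(y \mid \theta) for the deviance,
p_D = \overline{D(\theta)} - D(\bar{\theta}), \qquad \mathrm{DIC} = \overline{D(\theta)} + p_D,
where \overline{D(\theta)} is the posterior mean deviance and \bar{\theta} is the posterior mean of the parameters; smaller DIC indicates a better trade-off between fit and complexity.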
Journal Article · DOI

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
28 Aug 2015
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.