
Preregistration in Complex Contexts: A Preregistration Template for the Application of Cognitive Models

TL;DR: Open science practices have become increasingly popular in psychology and related sciences, as discussed by the authors; these practices aim to increase rigour and transparency in science as a potential response to the challenges posed by the replication crisis.

Summary

Cognitive Modelling

  • Cognitive modelling is the formal description of theories about psychological processes (Farrell & Lewandowsky, 2018).
  • While this diversity allows cognitive models to provide unique insights into psychological processes in a range of different contexts, it can also lead to some potential pitfalls.
  • Importantly, these issues are precisely what many open science practices, particularly preregistration, have been designed to address in purely experimental areas of psychology research.
  • Model application involves using cognitive models in a similar manner to statistical models (e.g., ANOVA). However, the assessments are performed on the theoretically meaningful parameters estimated within the cognitive model, rather than on the variables directly observed within the data, which creates several degrees of freedom that are not present in purely experimental research using only statistical models (see the sketch below).
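
To make this concrete, here is a minimal, self-contained sketch of the model-application workflow (the toy model and all names are hypothetical illustrations, not the authors' pipeline): a cognitive model is fit to each participant's data, and statistical inference is then performed on the estimated parameters rather than on the observed variables.

```python
import numpy as np
from scipy import optimize, stats

def fit_rate(rts):
    """Toy 'cognitive model': shifted-exponential RTs with processing rate r
    and shift t0; returns the maximum-likelihood rate estimate."""
    def nll(params):
        r, t0 = params
        if r <= 0 or t0 < 0 or np.any(rts <= t0):
            return np.inf  # reject parameter values outside the valid region
        return -np.sum(np.log(r) - r * (rts - t0))
    res = optimize.minimize(nll, x0=[1.0, 0.1], method="Nelder-Mead")
    return res.x[0]

rng = np.random.default_rng(0)
# Simulated RTs (seconds) for two hypothetical groups of 20 participants each.
group_a = [0.2 + rng.exponential(1 / 3.0, 200) for _ in range(20)]
group_b = [0.2 + rng.exponential(1 / 3.5, 200) for _ in range(20)]

rates_a = [fit_rate(rts) for rts in group_a]
rates_b = [fit_rate(rts) for rts in group_b]
# Inference happens on the model parameter, not on the observed variables:
print(stats.ttest_ind(rates_a, rates_b))
```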

Researcher Degrees of Freedom in Model Application

  • Previous research has suggested that the choice of preregistration template can make a meaningful difference to the quality of the resulting preregistrations.
  • In fact, there have been cognitive modelling preregistrations submitted to the OSF that mentioned no more than the general modelling approach, which is understandable given that these preregistration templates were not designed for cognitive modelling studies.
  • In order to appropriately apply preregistration to research contexts that are not purely experimental, it is important to identify the unique researcher degrees of freedom within the area of research that the preregistration should ideally constrain.
  • Below, the authors identify several degrees of freedom that they believe are present within model application, and then provide a more detailed discussion of these degrees of freedom and why they are important.

Parameter Estimation

  • Deciding on the method of parameter estimation (E1).
  • Specifying settings/priors for parameter estimation (E2).
  • If the data are going to be summarised into descriptive statistics, specifying which descriptive statistics will be used and how (E3). (An illustrative sketch of such answers follows this list.)
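
For illustration, the answers to E1-E3 might be pinned down as concretely as the following (a hypothetical sketch loosely modelled on the example application later in the paper, not an excerpt from the template itself):

```python
# Hypothetical, illustrative prereg answers for E1-E3; every value here is an
# example of the intended level of specificity, not a recommendation.
estimation_plan = {
    "E1_method": "Bayesian hierarchical estimation via DE-MCMC",
    "E2_settings": {
        "chains": 66,
        "samples_per_chain": 3000,
        "burn_in": 1500,
        "group_level_distribution": "truncated normal",
    },
    # E3: no descriptive summaries -- the raw choice/RT data are modelled directly.
    "E3_descriptives": None,
}
```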

Statistical Inference

  • Choosing a method of statistical inference on parameters (e.g., when comparing conditions) (I1). [This is largely covered by existing templates.]
  • Specifying which parameters will be assessed (e.g., allowed to vary across experimental conditions) (I2).

Issues and Peculiarities in Preregistering Model Application

  • The concrete list of modeller's degrees of freedom above provides an indication of what factors should be constrained by an exhaustive preregistration in model application, beyond the general researcher degrees of freedom.
  • An effective preregistration of the model and the model parameterisation can constrain these potential researcher degrees of freedom, which would help ensure more rigorous model application work.
  • Therefore, their preregistration template integrates parts of the preregistration template for secondary data proposed by Weston et al. (2018).
  • If researchers were to only perform or report the robustness checks that were successful in showing their results to be robust, then readers would likely become overconfident in the results of the study, as the findings would appear robust against all reported robustness checks.
  • Therefore, although the authors agree that robustness checks are an important part of model application (and more generally, cognitive modelling research), they disagree that such checks are an alternative to preregistration; instead, they are an aspect of the study that should be included within a preregistration document.

A Template for Preregistration in Model Application

  • Taking the previous considerations into account, the authors developed a template for preregistration in model application, which can either be used in the context of standard preregistration, or as a basis for the Registered Modelling Reports journal format suggested by Lee et al. (2019), which builds on the conventional Registered Reports journal format (Chambers et al., 2014).
  • It should also be noted that there is already at least one example of a modelling study using a detailed preregistration (Arnold, Heck, Bröder, Meiser, & Boywitt, 2019), meaning that their template is not the only method for creating a highly constraining preregistration document in model application.
  • This study by Arnold et al. (2019) seems to be the exception to the rule: preregistration in model application appears to be quite rare, and most other preregistration documents in model application do not sufficiently constrain the types of modeller's degrees of freedom that the authors discuss above.
  • Note that to make their preregistration template as concrete as possible, each part of their preregistration template will be accompanied by an example related to their example application.
  • The details of their example application can be found in the next section.

Preregistration Template

  • Taking into account the modeller's degrees of freedom and the potential issues that the authors discussed earlier, they combined the preregistration template "OSF Prereg" with parts of the secondary data analysis template (Weston et al., 2018), and used this as a basis for their model application preregistration template.
  • Based on their discussion of the unique modeller's degrees of freedom in model application, their template also involved adapting, removing, and adding some sections.
  • The authors discuss the sections that they have added below. The added sections ask specific questions and thereby hopefully encourage exhaustiveness; the additions proposed here prompt answers that are as specific as possible.

A.3 More information

  • The architecture of the model should be pre-specified in a way that is specific, precise, and exhaustive.
  • To this end, you should ideally include a plate diagram and specify the relevant equations.
  • If Bayesian hierarchical modelling is used for parameter estimation, the structure of the hierarchical model and the prior distributions over the parameters belong in this parameterisation as well (see the illustrative specification below).
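
As an illustration of the intended level of precision, a hierarchical specification for, say, the threshold parameter might be written out as follows (a hypothetical sketch; the specific distributions and hyperparameter values are placeholders, not the authors' parameterisation):

```latex
% Individual-level thresholds a_i drawn from a group-level truncated normal;
% group-level location and scale receive their own priors (values illustrative).
a_i \sim \mathcal{N}_{(0,\infty)}(\mu_a, \sigma_a^2), \qquad i = 1, \dots, N,
\qquad \mu_a \sim \mathcal{N}_{(0,\infty)}(1, 1), \qquad \sigma_a \sim \Gamma(1, 1)
```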

B.2 Example

  • Bayesian hierarchical modelling will be the only method used to estimate the parameters of the diffusion model, constraining individual-level parameters to follow group-level truncated normal distributions.
  • For the estimation model, the two groups (fixed-trial and fixed-time) are given a separate hierarchical structure, and the group-level parameters are not constrained between groups.
  • Following Evans & Brown (2017) and Evans et al. (2018), the authors will use likelihood functions taken from the "fast-dm" toolbox (Voss & Voss, 2007) for the calculation of the density function of the simple diffusion model.
  • For the first model, for sampling from the posterior distributions over parameters, the authors will use Markov chain Monte Carlo with differential evolution proposals (Turner et al., 2013), using 66 chains, drawing 3,000 samples from each, and discarding the first 1,500 samples (as in Evans & Brown, 2017; see supplementary materials). A sketch of this sampling scheme follows this list.
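
The following is a minimal sketch of Markov chain Monte Carlo with differential evolution proposals, run on a toy two-dimensional normal posterior rather than the hierarchical diffusion model (which requires the fast-dm likelihoods); the settings mirror those stated above, but the code is an illustration, not the authors' implementation.

```python
import numpy as np

def log_posterior(theta):
    # Toy target: standard bivariate normal. In the actual application this
    # would be the hierarchical diffusion-model posterior.
    return -0.5 * np.sum(theta ** 2)

def de_mcmc(log_post, n_chains=66, n_iter=3000, burn_in=1500, n_dim=2,
            gamma=None, eps=1e-4, seed=0):
    """Differential-evolution MCMC (cf. Turner et al., 2013). Each chain
    proposes a jump along the difference of two other randomly chosen chains,
    which adapts to the posterior's scale and correlation structure."""
    rng = np.random.default_rng(seed)
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * n_dim)        # standard DE-MC tuning
    chains = rng.normal(size=(n_chains, n_dim))  # dispersed start points
    log_p = np.array([log_post(c) for c in chains])
    samples = np.empty((n_iter, n_chains, n_dim))
    for t in range(n_iter):
        for i in range(n_chains):
            # Pick two distinct chains other than i for the difference vector.
            j, k = rng.choice([c for c in range(n_chains) if c != i],
                              size=2, replace=False)
            proposal = (chains[i] + gamma * (chains[j] - chains[k])
                        + rng.uniform(-eps, eps, size=n_dim))
            log_p_prop = log_post(proposal)
            # Metropolis accept/reject step.
            if np.log(rng.uniform()) < log_p_prop - log_p[i]:
                chains[i], log_p[i] = proposal, log_p_prop
        samples[t] = chains
    return samples[burn_in:]                     # discard burn-in

posterior_samples = de_mcmc(log_posterior)       # shape: (1500, 66, 2)
```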

C.2 Example

  • The key analysis will be replicated a) including participants/trials that were initially excluded in line with their exclusion criteria, and b) using a model in which the threshold parameter and the drift rate parameter vary across blocks.
  • The results of these robustness checks will be reported alongside the key results, and interpreted accordingly.
  • This section ensures that robustness checks are not performed and/or reported selectively.
  • Any post-hoc addition or modulation is then clearly exploratory rather than confirmatory research.
  • In the fully adapted and combined preregistration template, researchers are further asked to specify, for example, how they will perform statistical inference and on which parameters.

Example Application Background

  • The authors' example application focuses on applying evidence accumulation models (also commonly referred to as sequential sampling models), which have been hugely useful and influential in the psychology literature (Evans & Wagenmakers, 2019; Forstmann, Ratcliff, & Wagenmakers, 2016), and are thus an ideal focus for their discussions of preregistration in model application.
  • Evidence accumulation models describe the fundamental process of making a decision between alternatives in the presence of noise (e.g., Ratcliff, 1978; Ratcliff & Smith, 2004; Brown & Heathcote, 2008; Usher & McClelland, 2001), where evidence accumulates for the different decision alternatives until the evidence for one reaches a threshold, and a decision is made.
  • The main finding of Evans & Brown (2017) was that with enough practice and feedback, people were able to optimise the speed-accuracy trade-off, and that they do so faster with increasing amounts of feedback.
  • Interestingly, they also found that participants who completed a fixed number of trials per block were closer to optimality than participants who completed trials for a fixed amount of time in each block, which is in conflict with previous research (Starns & Ratcliff, 2012).
  • The assessment in Evans & Brown (2017) was rather qualitative and not very rigorously defined, making this a good opportunity to show how preregistration in cognitive modelling can add rigour and transparency in situations with many potential researcher degrees of freedom.

Example Application Sample & Materials

  • The authors used a subset of an existing dataset consisting of 70 participants who were recruited at the University of Newcastle and received course credit for their participation.
  • Note that power analyses are not currently possible for the application of complex cognitive models, and more generally, the concept of power is only applicable within a significance testing framework with meaningful cut-off points between an effect being present and not being present.
  • Instead, the authors planned to use Bayes factors for a more continuous, strength of evidence approach.
  • Applying these criteria resulted in the exclusion of 9 participants from the fixed-time condition (8 due to accuracy, 1 due to too few eligible trials), and 10 participants from the fixed-trial condition (all due to accuracy).

Example Application Preregistration

  • The authors' example application was preregistered at https://osf.io/39t5x/.
  • The authors preregistered the following hypotheses: H1 With suitable practice and medium feedback (cf. Evans & Brown, 2017), participants get closer to optimality with each block of trials.
  • Using only the second half of all 20 blocks (11-20, so as to account for participants adjusting to the task), the authors will test whether each group, separately, differs from optimality using Bayes factors, approximated with the Savage-Dickey ratio on µ_c (Testing H2; a sketch of this approximation follows this list).
  • RT stands for reaction time, resp stands for response accuracy.
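
For H2, the Savage-Dickey ratio approximates the Bayes factor as the ratio of prior to posterior density at the point of test (here, the point of optimality). A minimal sketch, with hypothetical posterior samples and a hypothetical standard-normal prior on µ_c standing in for the real model output:

```python
import numpy as np
from scipy import stats

def savage_dickey_bf10(posterior_samples, prior_density_at_null, null_value=0.0):
    """Savage-Dickey approximation: BF01 = posterior density at the null over
    prior density at the null; BF10 is its reciprocal."""
    posterior_density = stats.gaussian_kde(posterior_samples)(null_value)[0]
    return prior_density_at_null / posterior_density

# Hypothetical inputs: MCMC draws of mu_c and a N(0, 1) prior evaluated at 0.
mu_c_draws = np.random.default_rng(1).normal(0.4, 0.2, size=5000)
bf10 = savage_dickey_bf10(mu_c_draws, stats.norm(0, 1).pdf(0.0))
print(bf10)  # evidence for a deviation from optimality
```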

Example Application Results

  • Figure 4 shows the posterior distributions of the decision threshold parameters against the posterior predictive distributions for the optimal threshold.
  • From this qualitative analysis, it is unclear whether the groups differ in the extent to which they move towards optimality.
  • Testing each group separately reveals strong evidence for the participants of both groups being too cautious in their decision making (BF_Time = 149, BF_Trial = 31.664).
  • Using the Savage-Dickey method on ∆_c leads to weak evidence for the groups not differing in their distance from optimality (BF = 1.182), albeit in the direction of the fixed-trial group being closer to optimality than the fixed-time group.
  • Fixed-time participants completed an average of 25.66 trials per block, and fixed-trial participants took an average of 92.93 seconds per block.

Example Application Discussion

  • The authors' results do not fully replicate Evans & Brown (2017).
  • In their replication, participants did move towards optimality given practice and feedback, but there was no clear difference between the fixed-time and fixed-trial groups.
  • This may be a result of their updated statistical analysis methods, as their qualitative pattern of results looks similar to that of Evans & Brown (2017), and it is their new quantitative analyses that suggest that there is no evidence for a difference between groups.
  • Regardless, their findings indicate that there is not necessarily a difference between fixed-trial blocks and fixed-time blocks in how close people are able to come to reward rate optimality, and therefore, this perceived difference in Evans & Brown (2017) should be interpreted with caution.
  • It should also be noted that their findings showed strong evidence for participants being suboptimally cautious, which again is somewhat against the conclusions of Evans & Brown (2017), but makes sense in the context of Evans et al. (2018), who suggested that only specific experimental designs (e.g., slower trial-to-trial timing) will result in people achieving reward rate optimality.

General Discussion

  • The authors have proposed a concrete list of modeller's degrees of freedom, developed a preregistration template for model application, and showcased the possibility of preregistering model application studies with an example application.
  • The authors' overarching goal was to display how preregistration templates can be developed within areas of psychology that have diverse hypotheses and complex analyses, such as cognitive modelling, with a more specific goal of making preregistration in model application more feasible.
  • It has previously been found that a format with specific, open-ended questions is better at restricting researcher degrees of freedom than a purely open-ended template (Veldkamp et al., 2018).

Limitations

  • First and foremost, it should be noted that their proposed preregistration template is an initial proposal of what a preregistration might look like in cognitive modelling, and specifically for the category of model application.
  • Instead, their aim is to create an initial tool for researchers who are interested in preregistering their model application study, but are unsure of how to do so.
  • Researchers more familiar with other classes of models may have different opinions on which degrees of freedom should be constrained within a model application preregistration document.
  • This would allow the researcher to evaluate the fit of the model in a constrained way, based on a priori defined criteria.

Future Directions

2 Data Description for Pre-existing Data

2.1 Name or brief description of dataset(s)

  • Motion discrimination task with a random dot kinematogram in 70 participants.

2.2 Is this data open or publicly available?

  • The data are currently not openly available.
  • They were collected using JATOS and are stored on a University of Newcastle server.

6.1 Data exclusion

  • For all participants, the first block of trials will be excluded to allow for participants to become adequately practiced at the task.
  • Trials with response times below 150 ms or above 10,000 ms will be excluded as anticipatory responses and trials where participants lost attention, respectively.
  • Participants with task accuracy below 60% or fewer than 200 eligible trials will be excluded.
  • The number of eligible trials was decided after examining the data but not the dependent variable, based on the number of trials required for accurate parameter estimation.
  • The authors argue that participants performing below 70% may simply be responding with high urgency, in which case their exclusion might bias the analyses, but those performing below 60% would be too close to chance. (A sketch of these criteria in code follows this list.)
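
As a sketch, the exclusion criteria above could be implemented as follows (column names and the data-frame layout are hypothetical; this is not the authors' analysis code):

```python
import pandas as pd

def apply_exclusions(trials: pd.DataFrame) -> pd.DataFrame:
    """Apply the preregistered exclusions to trial-level data with
    (hypothetical) columns: subject, block, rt_ms, correct."""
    t = trials[trials["block"] > 1]                      # drop the first block
    t = t[(t["rt_ms"] >= 150) & (t["rt_ms"] <= 10_000)]  # trial-level RT cuts
    per_subject = t.groupby("subject").agg(
        accuracy=("correct", "mean"), n_trials=("rt_ms", "size"))
    keep = per_subject[(per_subject["accuracy"] >= 0.60)
                       & (per_subject["n_trials"] >= 200)].index
    return t[t["subject"].isin(keep)]                    # participant-level cuts
```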

7.1 Choice of Cognitive Model

  • As in Evans et al. (2018), the parameters of a simple diffusion model will be estimated, namely only: drift rate (v), starting point (z), threshold (a), and non-decision time (t_er). (The simple diffusion process is sketched after this list.)
  • This differs from Evans & Brown (2017), where the full diffusion model was estimated, i.e., including between-trial variability parameters for drift rate, starting point, and non-decision time.
  • These between-trial variability parameters were not relevant for Evans & Brown (2017), and without them, the simple diffusion model has better parameter recovery results (Lerche & Voss, 2016).
  • Figure 1 shows a plate diagram of the hierarchical structure used for the qualitative model-based analysis assessing 1) whether groups appear to get closer to optimality over time, 2) whether each group differs from optimality, and 3) whether there appears to be a difference between the groups (see Analysis Plan for more information); i indexes participants, and j indexes blocks.
  • Only the threshold parameter varies between blocks, to estimate changes in the speed-accuracy trade-off. Samples will again be drawn (to ensure greater precision for the more quantitatively precise Bayes factor comparison), with the first 1,500 discarded as burn-in.
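
For reference, the simple (four-parameter) diffusion process referred to above is standardly written as a Wiener process with drift between two absorbing boundaries (a textbook formulation, e.g., Ratcliff, 1978; not reproduced from the preregistration itself):

```latex
% Evidence X(t) drifts at rate v with within-trial noise s until it reaches
% 0 or a, starting from z (0 < z < a); the observed response time adds the
% non-decision time t_{er} to the first-passage time of the process.
dX(t) = v\,dt + s\,dW(t), \qquad X(0) = z, \qquad
\mathrm{RT} = T_{\text{first passage}} + t_{er}
```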

8.1 Statistical models

  • The authors previously specified three hypotheses: H1 With suitable practice and medium feedback (cf. Evans & Brown, 2017), participants get closer to optimality with each block of trials.
  • H3 Participants who complete a fixed number of trials are closer to optimality than participants who complete trials in a fixed amount of time.
  • In order to test these hypotheses, the authors will first need to calculate mean response time and accuracy for each participant and both groups.
  • Here, PC is the probability of a correct response, MRT is the mean correct response time, ITI refers to the inter-trial interval, FDT refers to feedback display time, and ET refers to the error time-out; these terms enter the reward-rate calculation (a standard form is sketched after this list).
  • The optimal threshold setting maximises reward rate given the estimated values of all other parameters, and is identified by calculating the expected accuracy and mean response time for each setting of the threshold parameter (Bogacz et al., 2006) .
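
The extract omits the equation these terms belong to. A standard reward-rate formulation consistent with this glossary, in the spirit of Bogacz et al. (2006), would be the following; the authors' exact expression may differ in detail:

```latex
% Reward rate: proportion correct per unit of expected time per trial,
% where errors incur an additional time-out ET.
\mathrm{RR} = \frac{PC}{\mathrm{MRT} + \mathrm{ITI} + \mathrm{FDT} + (1 - PC)\,\mathrm{ET}}
```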

8.3 Inference criteria

  • These are the inference criteria for the analyses in 8.1 and 8.2:
  • Criterion for the BF testing H2: following Jeffreys (1961) for the interpretation of strength of evidence given a Bayes factor (see the sketch after this list).
  • Criterion for the BF testing H3: following Jeffreys (1961) for the interpretation of strength of evidence given a Bayes factor.
  • Criterion to test H1, qualitatively evaluating the plots comparing posterior to posterior predictive distributions: if the actual thresholds of both (or one) group(s) clearly show a trend towards optimality, the authors will conclude that participants move towards optimising the speed-accuracy trade-off; otherwise, their conclusion will be suitably less strong and discuss any lack of clarity.
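
Jeffreys' (1961) verbal labels grade Bayes factors at 10^0.5, 10, 10^1.5, and 100; a small helper makes the preregistered criterion mechanical (an illustrative sketch, not taken from the preregistration):

```python
def jeffreys_label(bf: float) -> str:
    """Verbal strength-of-evidence labels after Jeffreys (1961).
    For bf < 1, invert and read as evidence for the null."""
    if bf < 1:
        return jeffreys_label(1 / bf) + " (favouring the null)"
    if bf < 10 ** 0.5:
        return "barely worth mentioning"
    if bf < 10:
        return "substantial"
    if bf < 10 ** 1.5:
        return "strong"
    if bf < 100:
        return "very strong"
    return "decisive"

print(jeffreys_label(149))     # decisive
print(jeffreys_label(31.664))  # very strong
```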


Preregistration in Complex Contexts: A Preregistration Template for the Application of Cognitive Models

Sophia Crüwell¹,² and Nathan J. Evans²,³

¹ Meta-Research Innovation Center Berlin (METRIC-B), QUEST Center for Transforming Biomedical Research, Berlin Institute of Health, Charité Universitätsmedizin Berlin, Germany
² Department of Psychology, University of Amsterdam, The Netherlands
³ School of Psychology, University of Newcastle, Australia

Word count: 8,283

Correspondence concerning this article may be addressed to: Sophia Crüwell (sophia.cruewell@charite.de)

Abstract

In recent years, open science practices have become increasingly popular in psychology and related sciences. These practices aim to increase rigour and transparency in science as a potential response to the challenges posed by the replication crisis. Many of these reforms, including the highly influential preregistration, have been designed for experimental work that tests simple hypotheses with standard statistical analyses, such as assessing whether an experimental manipulation has an effect on a variable of interest. However, psychology is a diverse field of research, and the somewhat narrow focus of the prevalent discussions surrounding and templates for preregistration has led to debates on how appropriate these reforms are for areas of research with more diverse hypotheses and more complex methods of analysis, such as cognitive modelling research within mathematical psychology. Our article attempts to bridge the gap between open science and mathematical psychology, focusing on the type of cognitive modelling that Crüwell, Stefan, & Evans (2019) labelled model application, where researchers apply a cognitive model as a measurement tool to test hypotheses about parameters of the cognitive model. Specifically, we (1) discuss several potential researcher degrees of freedom within model application, (2) provide the first preregistration template for model application, and (3) provide an example of a preregistered model application using our preregistration template. More broadly, we hope that our discussions and proposals constructively advance the debate surrounding preregistration in cognitive modelling, and provide a guide for how preregistration templates may be developed in other diverse or complex research contexts.

Keywords: Cognitive modelling; Reproducibility; Open science; Preregistration; Transparency

The replication crisis has been an issue for psychology and related fields since at least 2011 (Pashler & Wagenmakers, 2012), though many of the associated problems are likely much older (cf. e.g., Sterling, 1959; Cohen, 1965; Meehl, 1967). These problems have led to the proposal of a variety of reforms, often termed open science practices, which emphasise rigour, specificity, the constraint of flexibility, and transparency. These practices include data sharing (Klein et al., 2018), preregistration (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), and the journal article format Registered Reports (Chambers et al., 2014). The term open science is commonly used to refer to these practices as they encourage openness in the sense of transparent and accessible research (Crüwell et al., 2018), which in combination with specificity and constraint are essential to counteract the effect of cognitive biases and other pressures that may influence scientific findings (Munafò et al., 2017). The current article focuses on the open science practice of preregistration, which we discuss in more detail below, and how it might be implemented within cognitive modelling studies, which we discuss in more detail in the following sections.

Preregistration involves the specification of a researcher's plans for a study, including hypotheses and analyses, typically before the study is conducted. This usually takes the form of a document that contains these plans, which is made available online. Preregistration can help constrain researcher degrees of freedom (i.e., undisclosed flexibility in study design, data collection, and/or data analysis; Simmons, Nelson, & Simonsohn, 2011), and alleviate the effects of questionable research practices (QRPs) such as hypothesising after results are known (HARKing; Kerr, 1998) or p-hacking. This is important as each of these practices can render the interpretations of results based on seemingly confirmatory analyses invalid (Wagenmakers et al., 2012). Thus, while most published studies in psychology claim to be confirmatory, it may be difficult to know whether these studies truly are confirmatory without the a priori specification of hypotheses and analysis plans, particularly given the incredibly high incidence of findings falling in line with the "confirmatory" predictions of psychology studies (Fanelli, 2010).

Ideally, the constraint imposed by preregistration should clearly distinguish between the exploratory and confirmatory steps within a research project (i.e., separate prediction from "postdiction"; Wagenmakers et al., 2012). However, the fact that a study is preregistered should not be taken as a marker of quality, as the preregistration document may lack the specificity needed to effectively constrain potential researcher degrees of freedom (Veldkamp et al., 2018), and the decisions made in the preregistration may not be well justified or appropriate (Szollosi et al., 2019). The Registered Reports format allows for an assessment of the quality of the pre-specified plan through an initial round of peer review before the study is conducted, meaning that researchers can alter their pre-specified plans based on reviewer feedback (Chambers et al., 2014). However, this is not the case for the standard practice of preregistration, and many psychology journals do not currently include a Registered Report article format, meaning that researchers may initially struggle to create preregistration documents that are appropriately detailed and justified (Nosek, Ebersole, DeHaven, & Mellor, 2018). Several preregistration templates have been developed to assist researchers in creating preregistration documents, such as those provided by the Open Science Framework (OSF; https://osf.io/prereg/) and AsPredicted (https://aspredicted.org/), as well as checklists to assess the quality and constraint of a preregistration document (Wicherts et al., 2016). These templates and checklists have been designed as general-purpose tools for experimental psychology. Therefore, they are applicable to studies where researchers are interested in testing simple hypotheses, such as whether an experimental manipulation has an effect on a variable of interest, with simple analysis tools, such as a null hypothesis significance test on an interaction term within an ANOVA.

A large proportion of psychology studies fall within the standard experimental framework that these general-purpose templates and checklists have been designed to accommodate, making these tools of broad use to many researchers in psychology. However, psychology is a diverse field of research, and several areas of psychology commonly involve studies with more diverse hypotheses and more complex methods of analysis. Importantly, the central focus of preregistration endeavours on purely experimental research has led to debates on how appropriate preregistration is for psychological research that is not purely experimental, particularly in the area of cognitive modelling, where researchers use mathematical models that are formal representations of cognitive processes to better understand human cognition (Wagenmakers & Evans, 2018; Lewandowsky, 2019; Lee et al., 2019; Crüwell et al., 2019; MacEachern & Van Zandt, 2019; Szollosi et al., 2019; Vandekerckhove et al., 2019). Although some question the general usefulness of preregistration in areas of psychology research with more diverse hypotheses and more complex analyses (MacEachern & Van Zandt, 2019; Szollosi et al., 2019), others believe that preregistration could still serve an important purpose in constraining researcher degrees of freedom (Wagenmakers & Evans, 2018; Lee et al., 2019; Crüwell et al., 2019). However, the preregistration tools currently available to researchers may make achieving proper constraint practically infeasible, as the exact researcher degrees of freedom in these areas of research can differ greatly from those in purely experimental psychology (Wagenmakers & Evans, 2018; Lee et al., 2019; Crüwell et al., 2019; Vandekerckhove et al., 2019; though also see Arnold et al., 2019 for a cognitive modelling study with a well-constrained preregistration using existing tools). Recent research has already begun to create more specific preregistration templates for more specific areas of research, such as in qualitative research (Haven & Grootel, 2019; Kern & Skrede Gleditsch, 2017), experience sampling methodology (Kirtley et al., 2019), secondary data analysis (Mertens & Krypotos, 2019; Weston et al., 2018; van den Akker et al., 2019), and fMRI studies (Flannery, 2018). Therefore, the further development of method- and field-specific preregistration templates and checklists may improve the applicability of preregistration to areas of psychology research with more diverse hypotheses and more complex analyses, similar to how the development of general-purpose preregistration templates and checklists has helped researchers to create well-constrained preregistration documents for purely experimental studies.

Our article aims to bridge the gap between previous preregistration endeavours and research in areas of psychology with diverse hypotheses and complex analyses. At a general
Citations
Journal ArticleDOI
TL;DR: It is concluded that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
Abstract: Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of “researcher degrees of freedom” aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called “OSF Preregistration,” http://osf.io/prereg/). The Prereg Challenge format was a “structured” workflow with detailed instructions and an independent review to confirm completeness; the “Standard” format was “unstructured” with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the “structured” format restricted the opportunistic use of researcher degrees of freedom better (Cliff’s Delta = 0.49) than the “unstructured” format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.

44 citations

Journal ArticleDOI
TL;DR: Open science practices, such as registration of hypotheses and analytic plans before data collection and sharing analytic code and materials, can help to address research practices that may threaten the transparency, reproducibility, and replicability of research as discussed by the authors .
Abstract: Suicide claims more than 700,000 lives globally every year (World Health Organization, 2021) and affects approximately 135 people per individual who dies by suicide (Cerel et al., 2019). Those affected by suicide – from people with lived experience to policy-makers – are depending on researchers to provide reliable evidence: a prerequisite of effective prevention and treatment. However, not all evidence is equal; studies with small sample sizes may produce spurious results (Carpenter & Law, 2021) and measures may be unable to capture suicidal thoughts and behaviors in a reliable and valid way (Millner et al., 2020), which can compromise the generalizability of findings. The quality of the research methods used to generate evidence is the key to determining the credibility we afford it (Vazire et al., 2021). Although we have undoubtedly made progress over the years in our understanding of suicide, recent research does not appear to have built upon previous work to the extent it could have done – mostly because of major methodological limitations in suicide research and publication bias limiting insights into the full range of existing findings (Franklin et al., 2017; Pirkis, 2020). To build on what has come before us, we need to be able to see what we are building on. Beyond unpublished null findings, there are many other reasons the evidence base is incomplete. Journal word limits may preclude sufficiently detailed descriptions of methods and statistical analysis to enable replication, abandoned research questions and analysis plans may not be reported as they make for a messier story, or after a long period of data collection, the original hypotheses and analysis plans may have become hazy, or could have changed based on knowledge of the data. How can we strengthen the foundations of our evidence base for the future and, in doing so, “future-proof” suicide research? We can take active steps to tackle the problematic research practices described earlier, which threaten transparency (openness about the research process), reproducibility (obtaining the same results again using the same data), and replicability (obtaining similar results with identical methods in new studies) of research. Open science practices, including registration of hypotheses and analytic plans before data collection (preregistration) and sharing analytic code and materials, can help to address research practices that may threaten the transparency, reproducibility, and replicability of research (Munafò et al., 2017). Conversations about transparency, reproducibility, and replicability have just begun to blossom in clinical psychology and psychiatry research (Tackett et al., 2017, 2019), and have only recently begun to open up formally in suicide research (Carpenter & Law, 2021). Following a proposal by the International Association for Suicide Prevention (IASP) Early Career Group, Crisis recently adopted the Registered Reports (RRs) article format (Pirkis, 2020); Carpenter and Law (2021) published an introduction to open science for suicide researchers; and the authors of the current editorial presented a symposium on open science practices at the 2021 IASP World Congress. In this editorial, we use examples from our and others' work to demonstrate the opportunities for future-proofing research by implementing open science practices, and we discuss some of the challenges and their potential solutions.
We cover implementing open science practices in new, ongoing, and concluded studies, and discuss practices in order of being “low” to “high” threshold to implement (based on Kathawalla et al., 2021). Space constraints preclude us from covering all open science

8 citations

Journal ArticleDOI
TL;DR: Pre-registration is a research practice where a protocol is deposited in a repository before a scientific project is performed as discussed by the authors , and the protocol may be publicly visible immediately upon deposition or it may remain hidden until the work is completed/published.
Abstract: Pre-registration is a research practice where a protocol is deposited in a repository before a scientific project is performed. The protocol may be publicly visible immediately upon deposition or it may remain hidden until the work is completed/published. It may include the analysis plan, outcomes, and/or information about how evaluation of performance (e.g. forecasting ability) will be made. Pre-registration aims to enhance the trust one can put in scientific work. Deviations from the original plan may still often be desirable, but pre-registration makes them transparent. While pre-registration has been advocated and used to variable extent in diverse types of research, there has been relatively little attention given to the possibility of pre-registration for mathematical modeling studies. Feasibility of pre-registration depends on the type of modeling and the ability to pre-specify processes and outcomes. In some types of modeling, in particular those that involve forecasting or other outcomes that can be appraised in the future, trust in model performance would be enhanced through pre-registration. Pre-registration can also be seen as a component of a larger suite of research practices that aim to improve documentation, transparency, and sharing, eventually allowing better reproducibility of the research work. The current commentary discusses the evolving landscape of the concept of pre-registration as it relates to different mathematical modeling activities, the potential advantages and disadvantages, feasibility issues, and realistic goals.

8 citations

Journal ArticleDOI
TL;DR: Pre-registration as discussed by the authors is a technique that allows scientists to declare a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known.
Abstract: Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.

7 citations

Journal ArticleDOI
TL;DR: In this paper, insights from computational models and social neuroscience into motivations, precursors, and mechanisms of altruistic decision-making and other-regard are discussed, and theoretical and methodological tools for researchers who wish to adopt a multilevel, computational approach to study behaviors that promote others' welfare.
Abstract: This article discusses insights from computational models and social neuroscience into motivations, precursors, and mechanisms of altruistic decision-making and other-regard. We introduce theoretical and methodological tools for researchers who wish to adopt a multilevel, computational approach to study behaviors that promote others' welfare. Using examples from recent studies, we outline multiple mental and neural processes relevant to altruism. To this end, we integrate evidence from neuroimaging, psychology, economics, and formalized mathematical models. We introduce basic mechanisms-pertinent to a broad range of value-based decisions-and social emotions and cognitions commonly recruited when our decisions involve other people. Regarding the latter, we discuss how decomposing distinct facets of social processes can advance altruistic models and the development of novel, targeted interventions. We propose that an accelerated synthesis of computational approaches and social neuroscience represents a critical step towards a more comprehensive understanding of altruistic decision-making. We discuss the utility of this approach to study lifespan differences in social preference in late adulthood, a crucial future direction in aging global populations. Finally, we review potential pitfalls and recommendations for researchers interested in applying a computational approach to their research. This article is categorized under: Economics > Interactive Decision-Making Psychology > Emotion and Motivation Neuroscience > Cognition Economics > Individual Decision-Making.

3 citations

References
Journal ArticleDOI
TL;DR: In this article, a new estimate minimum information theoretical criterion estimate (MAICE) is introduced for the purpose of statistical identification, which is free from the ambiguities inherent in the application of conventional hypothesis testing procedure.
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate minimum information theoretical criterion (AIC) estimate (MAICE) which is designed for the purpose of statistical identification is introduced. When there are several competing models the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC defined by AIC = (-2)log-(maximum likelihood) + 2(number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.

47,133 citations

Journal ArticleDOI
TL;DR: In this paper, the problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion.
Abstract: The problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion. These terms are a valid large-sample criterion beyond the Bayesian context, since they do not depend on the a priori distribution.

38,681 citations

Journal ArticleDOI
TL;DR: It is shown that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false- positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Abstract: In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.

4,727 citations

Journal ArticleDOI
Roger Ratcliff
TL;DR: A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms, and it is noted that neural network models can be interfaced to the retrieval theory with little difficulty and that semantic memory models may benefit from such a retrieval scheme.
Abstract: A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms. Access to memory traces is viewed in terms of a resonance metaphor. The probe item evokes the search set on the basis of probe-memory item relatedness, just as a ringing tuning fork evokes sympathetic vibrations in other tuning forks. Evidence is accumulated in parallel from each probe-memory item comparison, and each comparison is modeled by a continuous random walk process. In item recognition, the decision process is self-terminating on matching comparisons and exhaustive on nonmatching comparisons. The mathematical model produces predictions about accuracy, mean reaction time, error latency, and reaction time distributions that are in good accord with experimental data. The theory is applied to four item recognition paradigms (Sternberg, prememorized list, study-test, and continuous) and to speed-accuracy paradigms; results are found to provide a basis for comparison of these paradigms. It is noted that neural network models can be interfaced to the retrieval theory with little difficulty and that semantic memory models may benefit from such a retrieval scheme.

3,856 citations

Journal ArticleDOI
TL;DR: The time course of perceptual choice is discussed in a model of gradual, leaky, stochastic, and competitive information accumulation in nonlinear decision units that captures choice behavior regardless of the number of alternatives, and explains a complex pattern of visual and contextual priming in visual word identification.
Abstract: The time course of perceptual choice is discussed in a model of gradual, leaky, stochastic, and competitive information accumulation in nonlinear decision units. Special cases of the model match a classical diffusion process, but leakage and competition work together to address several challenges to existing diffusion, random walk, and accumulator models. The model accounts for data from choice tasks using both time-controlled (e.g., response signal) and standard reaction time paradigms and its adequacy compares favorably with other approaches. A new paradigm that controls the time of arrival of information supporting different choice alternatives provides further support. The model captures choice behavior regardless of the number of alternatives, accounting for the log-linear relation between reaction time and number of alternatives (Hick's law) and explains a complex pattern of visual and contextual priming in visual word identification.

1,995 citations

Frequently Asked Questions (10)
Q1. What have the authors contributed in "Preregistration in complex contexts: a preregistration template for the application of cognitive models" ?

Specifically, the authors (1) discuss several potential researcher degrees of freedom within model application, (2) provide the first preregistration template for model application, and (3) provide an example of a preregistered model application using their preregistration template. More broadly, the authors hope that their discussions and proposals constructively advance the debate surrounding preregistration in cognitive modelling, and provide a guide for how preregistration templates may be developed in other diverse or complex research contexts.

When considering the future of preregistration in cognitive modelling, the authors believe that mathematical psychology is one of the fields of psychology best suited to creating constrained preregistration documents. Therefore, the authors believe that future research efforts should focus on developing preregistration templates for the other categories of cognitive modelling proposed by Crüwell et al. (2019), either through extending their preregistration template for model application or by creating new preregistration templates. The authors believe that this system of iterative preregistrations for different categories of cognitive modelling within a single study provides the ideal balance between constraint and diversity, as researchers are free to investigate the data in as much detail as they wish, but each analysis performed is constrained. Cognitive modelling is a highly theory-driven field of research (van Rooij, 2019), and the formal nature of cognitive models means that they make precise predictions about empirical data, which when compared to the widespread lack of theory in other parts of psychology (Muthukrishna & Henrich, 2019) suggests that certain categories of cognitive modelling – model application, model comparison, and model evaluation – may lend themselves well to preregistration.

In purely experimental studies, the aspect of power analysis is usually focused on the classic concept of statistical power (i.e., the probability of rejecting the null hypothesis given that it is false, within a null hypothesis significance testing framework).

When considering the future of preregistration in cognitive modelling, the authors believe that mathematical psychology is one of the fields of psychology best suited to creating constrained preregistration documents.

An effective preregistration of the model and the model parameterisation can constrain these potential researcher degrees of freedom, which would help ensure more rigorous model application work. 

Any post-hoc addition to, or modulation of, the model or the parameterisation should be clearly labelled as exploratory rather than confirmatory. 

As the authors argued previously, they believe that preregistration is the best tool currently available for constraining researcher degrees of freedom, and that model application studies may benefit from the use of preregistration and their template.

The further development of method- and field-specific preregistration templates and checklists may improve the applicability of preregistration to areas of psychology research with more diverse hypotheses and more complex analyses, similar to how the development of general-purpose preregistration templates and checklists has helped researchers to create well-constrained preregistration documents for purely experimental studies.

With specific templates for each category, a project including more than one modelling category could use the templates for each category to constrain the degrees of freedom in each part of the process, either simultaneously or sequentially. 

As in Evans et al. (2018), the parameters of a simple diffusion model will be estimated, namely only: drift rate (v), starting point (z), threshold (a), and non-decision time (t_er).