Author

Nathan J. Evans

Bio: Nathan J. Evans is an academic researcher from the University of Queensland. The author has contributed to research in topics: Elementary cognitive task & Transparency (behavior). The author has an h-index of 2 and has co-authored 4 publications receiving 6 citations. Previous affiliations of Nathan J. Evans include the University of Amsterdam & the University of Newcastle.

Papers
Journal ArticleDOI
TL;DR: The multiple object tracking (MOT) task provides an effective manipulation of cognitive workload, and the detection response task (DRT) is sensitive to changes in workload across a range of settings, making it suitable for use outside of driving scenarios as well as via online delivery.
Abstract: Objective: The present research applied a well-established measure of cognitive workload from the driving literature to an in-lab paradigm. We then extended this by comparing the in-lab version of the task...

9 citations

Posted ContentDOI
TL;DR: Open science practices have become increasingly popular in psychology and related sciences, as discussed by the authors; these practices aim to increase rigour and transparency in science.
Abstract: In recent years, open science practices have become increasingly popular in psychology and related sciences. These practices aim to increase rigour and transparency in science as a potential respon...

8 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a more conclusive answer regarding the implications of emphasizing urgent responding, re-analysing six data sets from previous studies using two different evidence accumulation models (EAMs), the diffusion model and the linear ballistic accumulator (LBA), with state-of-the-art methods for model-selection-based inference.

7 citations

Posted ContentDOI
30 May 2021, medRxiv
TL;DR: In this article, the authors investigated which processing stages are affected in children with dyslexia when performing visual motion processing tasks, by combining two methods that are sensitive to the dynamic processes leading to responses.
Abstract: Children with and without dyslexia differ in their behavioural responses to visual information, particularly when required to pool dynamic signals over space and time. Importantly, multiple processes contribute to behavioural responses. Here we investigated which processing stages are affected in children with dyslexia when performing visual motion processing tasks, by combining two methods that are sensitive to the dynamic processes leading to responses. We used a diffusion model, which decomposes response time and accuracy into distinct cognitive constructs, and high-density EEG. Fifty children with dyslexia and 50 typically developing children aged 6 to 14 years judged the direction of motion as quickly and accurately as possible in two global motion tasks, which varied in their requirements for segregating signal from noise. Following our pre-registered analyses, we fitted hierarchical Bayesian diffusion models to the data, blinded to group membership. Unblinding revealed reduced evidence accumulation in children with dyslexia compared to typical children for both tasks. We also identified a response-locked EEG component, maximal over centro-parietal electrodes, that indicated a neural correlate of reduced drift rate in dyslexia, thereby linking brain and behaviour. We suggest that children with dyslexia are slower to extract sensory evidence from global motion displays, regardless of whether they are required to segregate signal from noise, thus furthering our understanding of atypical perceptual decision-making processes in dyslexia.
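As a rough illustration of the diffusion-model decomposition the abstract refers to, the sketch below simulates choices and response times from a basic drift-diffusion process via Euler-Maruyama integration. This is not the authors' hierarchical Bayesian implementation, and the parameter values are invented for illustration; it only shows how a lower drift rate slows responses and reduces accuracy while boundary separation and non-decision time stay fixed.

```python
import numpy as np

def simulate_ddm(v, a, t0, s=1.0, dt=0.001, n_trials=1000, seed=0):
    """Simulate choices and RTs from a simple drift-diffusion model.

    v  : drift rate (speed of evidence accumulation)
    a  : boundary separation (response caution)
    t0 : non-decision time in seconds (encoding + motor execution)
    s  : within-trial noise, conventionally fixed to scale the model
    """
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0              # unbiased start, midway between bounds
        while 0.0 < x < a:               # accumulate until a boundary is hit
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x >= a)         # 1 = upper ("correct") boundary
        rts[i] = t + t0                  # add non-decision time
    return choices, rts

# Hypothetical parameters: halving drift rate mimics the reported group effect.
c_typ, rt_typ = simulate_ddm(v=2.0, a=1.5, t0=0.3)
c_dys, rt_dys = simulate_ddm(v=1.0, a=1.5, t0=0.3)
print(f"typical:  accuracy={c_typ.mean():.2f}, mean RT={rt_typ.mean():.2f} s")
print(f"dyslexia: accuracy={c_dys.mean():.2f}, mean RT={rt_dys.mean():.2f} s")
```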

1 citation


Cited by
Journal ArticleDOI
TL;DR: It is concluded that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
Abstract: Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of “researcher degrees of freedom” aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called “OSF Preregistration,” http://osf.io/prereg/). The Prereg Challenge format was a “structured” workflow with detailed instructions and an independent review to confirm completeness; the “Standard” format was “unstructured” with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the “structured” format restricted the opportunistic use of researcher degrees of freedom better (Cliff’s Delta = 0.49) than the “unstructured” format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
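For readers unfamiliar with the effect size reported above, Cliff's Delta is a non-parametric ordinal measure: the probability that a value drawn from one group exceeds one drawn from the other, minus the reverse probability. A minimal sketch follows; the coder ratings are invented for illustration and are not data from the study.

```python
import numpy as np

def cliffs_delta(x, y):
    """Cliff's Delta: P(X > Y) - P(X < Y), an ordinal effect size in [-1, 1]."""
    x, y = np.asarray(x), np.asarray(y)
    diffs = x[:, None] - y[None, :]      # all pairwise differences
    return ((diffs > 0).sum() - (diffs < 0).sum()) / diffs.size

# Hypothetical ratings of how well each format restricted researcher
# degrees of freedom (higher = more restricted).
structured = [4, 5, 3, 4, 5, 4]
unstructured = [2, 3, 3, 1, 4, 2]
print(f"Cliff's Delta = {cliffs_delta(structured, unstructured):.2f}")
```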

44 citations

Journal ArticleDOI
TL;DR: A multi-level meta-analysis found that age differences in drift rate are moderated both by task type and task difficulty, consistent with recent findings of a more pronounced age-related decline in memory than in vocabulary performance.
Abstract: Older adults typically show slower response times in basic cognitive tasks than younger adults. A diffusion model analysis allows the clarification of why older adults react more slowly by estimating parameters that map onto distinct cognitive components of decision making. The main components of the diffusion model are the speed of information uptake (drift rate), the degree of conservatism regarding the decision criterion (boundary separation), and the time taken up by non-decisional processes (i.e., encoding and motoric response execution; non-decision time). While the literature shows consistent results regarding higher boundary separation and longer non-decision time for older adults, results are more complex when it comes to age differences in drift rates. We conducted a multi-level meta-analysis to identify possible sources of this variance. As possible moderators, we included task difficulty and task type. We found that age differences in drift rate are moderated both by task type and task difficulty. Older adults showed lower drift rates in perceptual and memory tasks, but information accumulation was actually higher for older participants in lexical decision tasks. Additionally, in perceptual and lexical decision tasks, older individuals benefitted from high task difficulty. In the memory tasks, task difficulty did not moderate the negative impact of age on drift rate. The finding of higher boundary separation and longer non-decision time in older than younger adults generalized over task type and task difficulty. The results of our meta-analysis are consistent with recent findings of a more pronounced age-related decline in memory than in vocabulary performance.
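The three parameters the abstract names can be written down compactly. In the standard unbiased drift-diffusion model, evidence follows a Wiener process with drift until it reaches one of two boundaries, and the observed response time adds a non-decision component; a textbook result then gives the mean decision time. This is the generic formulation, not the specific parameterization used in the meta-analysis:

```latex
% v = drift rate, a = boundary separation, T_{er} = non-decision time,
% s = within-trial noise (a scaling constant)
\begin{aligned}
  dX_t &= v\,dt + s\,dW_t, \qquad X_0 = a/2,\\
  \mathrm{RT} &= T_{\mathrm{dec}} + T_{er},
      \qquad T_{\mathrm{dec}} = \inf\{\, t : X_t \in \{0, a\} \,\},\\
  \mathbb{E}[T_{\mathrm{dec}}] &= \frac{a}{2v}\tanh\!\left(\frac{av}{2s^2}\right).
\end{aligned}
```

Higher boundary separation a lengthens decision time (more caution), a lower drift rate v slows information uptake, and T_{er} shifts all response times equally, which is how a diffusion model analysis can separate these three sources of age-related slowing.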

21 citations

Journal ArticleDOI
TL;DR: Open science practices, such as registration of hypotheses and analytic plans before data collection and sharing analytic code and materials, can help to address research practices that may threaten the transparency, reproducibility, and replicability of research, as discussed by the authors.
Abstract: Suicide claims more than 700,000 lives globally every year (World Health Organization, 2021) and affects approximately 135 people per individual who dies by suicide (Cerel et al., 2019). Those affected by suicide – from people with lived experience to policy-makers – are depending on researchers to provide reliable evidence: a prerequisite of effective prevention and treatment. However, not all evidence is equal; studies with small sample sizes may produce spurious results (Carpenter & Law, 2021) and measures may be unable to capture suicidal thoughts and behaviors in a reliable and valid way (Millner et al., 2020), which can compromise the generalizability of findings. The quality of the research methods used to generate evidence is the key to determining the credibility we afford it (Vazire et al., 2021). Although we have undoubtedly made progress over the years in our understanding of suicide, recent research does not appear to have built upon previous work to the extent it could have done – mostly because of major methodological limitations in suicide research and publication bias limiting insights into the full range of existing findings (Franklin et al., 2017; Pirkis, 2020). To build on what has come before us, we need to be able to see what we are building on. Beyond unpublished null findings, there are many other reasons the evidence base is incomplete. Journal word limits may preclude sufficiently detailed descriptions of methods and statistical analyses to enable replication; abandoned research questions and analysis plans may not be reported because they make for a messier story; or, after a long period of data collection, the original hypotheses and analysis plans may have become hazy or could have changed based on knowledge of the data.

How can we strengthen the foundations of our evidence base for the future and, in doing so, "future-proof" suicide research? We can take active steps to tackle the problematic research practices described earlier, which threaten the transparency (openness about the research process), reproducibility (obtaining the same results again using the same data), and replicability (obtaining similar results with identical methods in new studies) of research. Open science practices, including registration of hypotheses and analytic plans before data collection (preregistration) and sharing analytic code and materials, can help to address these practices (Munafò et al., 2017). Conversations about transparency, reproducibility, and replicability have just begun to blossom in clinical psychology and psychiatry research (Tackett et al., 2017, 2019), and have only recently begun to open up formally in suicide research (Carpenter & Law, 2021). Following a proposal by the International Association for Suicide Prevention (IASP) Early Career Group, Crisis recently adopted the Registered Reports (RRs) article format (Pirkis, 2020); Carpenter and Law (2021) published an introduction to open science for suicide researchers; and the authors of the current editorial presented a symposium on open science practices at the 2021 IASP World Congress. In this editorial, we use examples from our and others' work to demonstrate the opportunities for future-proofing research by implementing open science practices, and we discuss some of the challenges and their potential solutions. We cover implementing open science practices in new, ongoing, and concluded studies, and discuss practices in order from "low" to "high" implementation threshold (based on Kathawalla et al., 2021). Space constraints preclude us from covering all open science...

8 citations

Journal ArticleDOI
TL;DR: Pre-registration is a research practice where a protocol is deposited in a repository before a scientific project is performed, as discussed by the authors, and the protocol may be publicly visible immediately upon deposition or it may remain hidden until the work is completed/published.
Abstract: Pre-registration is a research practice where a protocol is deposited in a repository before a scientific project is performed. The protocol may be publicly visible immediately upon deposition or it may remain hidden until the work is completed/published. It may include the analysis plan, outcomes, and/or information about how evaluation of performance (e.g. forecasting ability) will be made. Pre-registration aims to enhance the trust one can put in scientific work. Deviations from the original plan may still often be desirable, but pre-registration makes them transparent. While pre-registration has been advocated and used to a variable extent in diverse types of research, there has been relatively little attention given to the possibility of pre-registration for mathematical modeling studies. Feasibility of pre-registration depends on the type of modeling and the ability to pre-specify processes and outcomes. In some types of modeling, in particular those that involve forecasting or other outcomes that can be appraised in the future, trust in model performance would be enhanced through pre-registration. Pre-registration can also be seen as a component of a larger suite of research practices that aim to improve documentation, transparency, and sharing, eventually allowing better reproducibility of the research work. The current commentary discusses the evolving landscape of the concept of pre-registration as it relates to different mathematical modeling activities, the potential advantages and disadvantages, feasibility issues, and realistic goals.

8 citations

Journal ArticleDOI
TL;DR: Pre-registration as discussed by the authors is a technique that allows scientists to declare a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known.
Abstract: Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.

7 citations