Journal ArticleDOI

A critical systematic review of the Neurotracker perceptual-cognitive training tool

05 Apr 2021-Psychonomic Bulletin & Review (Springer US)-Vol. 28, Iss: 5, pp 1458-1483
TL;DR: The sport science debate regarding the value of general cognitive skill training, based on tools such as Neurotracker, versus sport-specific skill training is summarized, and the several hundred MOT publications from the last 30 years suggest that the abilities underlying object tracking are not those advertised by the Neurotracker manufacturers.
Abstract: In this systematic review, we evaluate the scientific evidence behind "Neurotracker," one of the most popular perceptual-cognitive training tools in sports. The tool, which is also used in rehabilitation and aging research to examine cognitive abilities, uses a 3D multiple object-tracking (MOT) task. In this review, we examine Neurotracker from both a sport science and a basic science perspective. We first summarize the sport science debate regarding the value of general cognitive skill training, based on tools such as Neurotracker, versus sport-specific skill training. We then consider the several hundred MOT publications in cognitive and vision science from the last 30 years that have investigated cognitive functions and object tracking processes. This literature suggests that the abilities underlying object tracking are not those advertised by the Neurotracker manufacturers. With a systematic literature search, we scrutinize the evidence for whether general cognitive skills can be tested and trained with Neurotracker and whether these trained skills transfer to other domains. The literature has major limitations, for example, a total absence of preregistered studies, which makes the evidence for improvements in working memory and sustained attention very weak. For other skills as well, the effects are mixed. Only three studies investigated far transfer to ecologically valid tasks, two of which did not find any effect. We provide recommendations for future Neurotracker research to improve the evidence base and to make better use of sport and basic science findings.


Citations
Journal ArticleDOI
13 Aug 2021
TL;DR: Soccer requires athletes to make quick decisions in dynamic environments, and several off-court technology-based interventions have been developed to train these perceptual-cognitive skills, but none of them have been applied in the real world.
Abstract: Soccer requires athletes to make quick decisions in dynamic environments. Several off-court technology-based interventions have been developed to train these perceptual cognitive skills. However, t...

6 citations

Journal ArticleDOI
TL;DR: There are a myriad of interventions designed to help enhance sustained attention in children and adolescents, as discussed by the authors, including cognitive attention training, meditation training, and physical activity interventions (in randomized-controlled or non-randomized-controlled designs).

3 citations

Journal ArticleDOI
TL;DR: An open-source cognitive test battery for assessing attention and memory, built with the JavaScript library p5.js, that captures diverse individual differences and evaluates them using cognitive factors extracted from latent factor analysis.
Abstract: Cognitive test batteries are widely used in diverse research fields, such as cognitive training, cognitive disorder assessment, and the study of brain mechanisms. Although they need flexibility according to their usage objectives, most test batteries are not available as open-source software and cannot be tuned by researchers in detail. The present study introduces an open-source cognitive test battery to assess attention and memory, built with the JavaScript library p5.js. Because of the ubiquitous nature of dynamic attention in our daily lives, it is crucial to have tools for its assessment and training. For that purpose, the test battery includes seven cognitive tasks (multiple-object tracking, enumeration, go/no-go, load-induced blindness, task-switching, working memory, and memorability) that are common in the cognitive science literature. Using the test battery, we conducted an online experiment to collect benchmark data. Results collected on two separate days showed high cross-day reliability; task performance did not change substantially across days. In addition, the test battery captures diverse individual differences and can evaluate them based on cognitive factors extracted from latent factor analysis. Since the source code is shared as open-source software, users can expand and manipulate experimental conditions flexibly. The test battery is also flexible in terms of the experimental environment: experiments can be run either online or in a laboratory.

2 citations
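To make the multiple-object-tracking (MOT) task at the core of the battery above (and of Neurotracker) concrete, here is a minimal sketch of a single 2D MOT trial written in TypeScript against the p5.js instance-mode API. It is an illustration only, not code from the published battery; the p5 package is assumed to be installed from npm, and all names and parameter values (number of dots, number of targets, cue duration, speed) are arbitrary choices for the example.

```typescript
// Minimal 2D multiple-object-tracking trial (illustrative sketch, not the battery's code).
import p5 from "p5";

const NUM_DOTS = 8;      // total moving dots (assumed value)
const NUM_TARGETS = 4;   // dots the participant must track (assumed value)
const SPEED = 2;         // pixels per frame
const CUE_FRAMES = 120;  // targets highlighted for ~2 s at 60 fps

const sketch = (p: p5) => {
  type Dot = { x: number; y: number; vx: number; vy: number; isTarget: boolean };
  const dots: Dot[] = [];

  p.setup = () => {
    p.createCanvas(400, 400);
    for (let i = 0; i < NUM_DOTS; i++) {
      const angle = p.random(p.TWO_PI);
      dots.push({
        x: p.random(20, 380),
        y: p.random(20, 380),
        vx: SPEED * Math.cos(angle),
        vy: SPEED * Math.sin(angle),
        isTarget: i < NUM_TARGETS, // the first few dots are the targets
      });
    }
  };

  p.draw = () => {
    p.background(30);
    const cueing = p.frameCount < CUE_FRAMES; // cue phase at the start of the trial
    for (const d of dots) {
      // Move the dot and bounce it off the canvas edges.
      d.x += d.vx;
      d.y += d.vy;
      if (d.x < 10 || d.x > p.width - 10) d.vx *= -1;
      if (d.y < 10 || d.y > p.height - 10) d.vy *= -1;
      // During the cue phase targets are drawn in a distinct color;
      // afterwards all dots look identical and must be tracked from memory.
      p.noStroke();
      p.fill(cueing && d.isTarget ? "orange" : "white");
      p.circle(d.x, d.y, 20);
    }
  };
};

new p5(sketch);
```

In a complete trial the motion phase would end after a fixed duration and the participant would click the dots they believe were targets, with accuracy scored as the number of correctly identified targets; that response phase is omitted here for brevity.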

Journal ArticleDOI
TL;DR: Wearable devices have been used to monitor movement-related physiological indices, including heartbeat, movement, and other exercise metrics, for health purposes, as discussed by the authors, and people are also paying more attention to mental health issues such as stress management.

2 citations

References
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Book
01 Jan 2001
TL;DR: In this book, the authors present experimental and quasi-experimental designs for generalized causal inference, including methods for single and multiple studies, designs that use control groups and pretest observations on the outcome, and a critical assessment of their assumptions.
Abstract:
1. Experiments and Generalized Causal Inference
2. Statistical Conclusion Validity and Internal Validity
3. Construct Validity and External Validity
4. Quasi-Experimental Designs That Either Lack a Control Group or Lack Pretest Observations on the Outcome
5. Quasi-Experimental Designs That Use Both Control Groups and Pretests
6. Quasi-Experimentation: Interrupted Time Series Designs
7. Regression Discontinuity Designs
8. Randomized Experiments: Rationale, Designs, and Conditions Conducive to Doing Them
9. Practical Problems 1: Ethics, Participant Recruitment, and Random Assignment
10. Practical Problems 2: Treatment Implementation and Attrition
11. Generalized Causal Inference: A Grounded Theory
12. Generalized Causal Inference: Methods for Single Studies
13. Generalized Causal Inference: Methods for Multiple Studies
14. A Critical Assessment of Our Assumptions

12,215 citations

Journal ArticleDOI
TL;DR: It is shown that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Abstract: In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.

4,727 citations
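The false-positive inflation described in the abstract above is easy to reproduce. The following self-contained Monte Carlo sketch, also in TypeScript, illustrates one of the researcher degrees of freedom the paper discusses: testing two groups and, if the result is not significant, adding more observations and testing again. It uses a z-test with known variance as a simplification of the paper's t-tests, and the sample sizes and number of simulations are arbitrary choices for the example.

```typescript
// Monte Carlo illustration of optional stopping inflating false positives
// (simplified: z-test with known variance instead of the paper's t-tests).

// One standard-normal draw via the Box-Muller transform.
function randn(): number {
  const u = 1 - Math.random();
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Two-sided p-value for a z statistic, using the Abramowitz & Stegun 7.1.26 erf approximation.
function pValue(z: number): number {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t;
  const erf = 1 - poly * Math.exp(-x * x);
  return 2 * (1 - 0.5 * (1 + erf));
}

// Significance of a two-group mean difference when both groups have sigma = 1.
function significant(a: number[], b: number[]): boolean {
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
  const z = (mean(a) - mean(b)) / Math.sqrt(1 / a.length + 1 / b.length);
  return pValue(z) < 0.05;
}

const SIMS = 20000;
let fixedN = 0;  // reject after a single look at n = 20 per group
let peeking = 0; // reject at n = 20, or after adding 10 more per group and looking again

for (let s = 0; s < SIMS; s++) {
  // Both groups come from the SAME distribution, so any rejection is a false positive.
  const a = Array.from({ length: 20 }, randn);
  const b = Array.from({ length: 20 }, randn);
  const firstLook = significant(a, b);
  if (firstLook) fixedN++;
  const a2 = a.concat(Array.from({ length: 10 }, randn));
  const b2 = b.concat(Array.from({ length: 10 }, randn));
  if (firstLook || significant(a2, b2)) peeking++;
}

console.log(`single look at n = 20: ${(100 * fixedN / SIMS).toFixed(1)}% false positives`);
console.log(`with one extra look:   ${(100 * peeking / SIMS).toFixed(1)}% false positives`);
```

With no true effect, the single-look test rejects at roughly the nominal 5% rate, while the extra look pushes the false-positive rate noticeably above 5%, which is the qualitative point of the simulations reported in the paper.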