Journal ArticleDOI

#EEGManyLabs : investigating the replicability of influential EEG experiments

Affiliations: University of Tübingen, University of Aberdeen, Max Planck Society, University of Sheffield, University of Dundee, Dresden University of Technology, Manchester Metropolitan University, University of Miami, Heidelberg University, University of Münster, University of South Florida, University of Birmingham, Stockholm School of Economics, Université de Montréal, University of Stuttgart, University of Plymouth, Bournemouth University, Complutense University of Madrid, Radboud University Nijmegen, University of Toronto, Australian National University, University of Winchester, National Research University – Higher School of Economics, Humboldt University of Berlin, University of Cambridge, University of Florida, University of Hamburg, University of Leeds, University of Erlangen-Nuremberg, University of Pittsburgh, University of Zurich, Ludwig Maximilian University of Munich, Autonomous University of Madrid, Texas A&M University, Tel Aviv University, University of Mainz, University of Texas of the Permian Basin, Stockholm University, Technical University of Madrid, Ruhr University Bochum, University of Edinburgh, Ghent University, University of Glasgow, University of Texas at Tyler, Monash University Malaysia Campus, Jagiellonian University, University of Nevada, Las Vegas, University of Oslo, Florida Atlantic University, University of Groningen, Katholieke Universiteit Leuven, University of Iowa Hospitals and Clinics, Russian Academy
02 Apr 2021-Cortex (Elsevier)-Vol. 144, pp 213-229
TL;DR: The #EEGManyLabs project is a large-scale international collaborative replication effort that aims to evaluate the replicability of influential EEG findings on the relationship between brain activity and cognitive phenomena.
About: This article was published in Cortex on 2021-04-02 and is currently open access. It has received 43 citations to date.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors provide an integrated overview of community-developed resources that can support collaborative, open, reproducible, replicable, robust, and generalizable neuroimaging throughout the entire research cycle, from inception to publication.

24 citations

Journal ArticleDOI
TL;DR: This Special Issue points out interesting divides in the study of rhythmic sampling across different domains and highlights the importance of publishing negative findings and replications to improve the understanding of the role of rhythms in cognition.
Abstract: Do humans perceive the world through discrete sampling of the sensory environment? Although it contrasts starkly with the intuition of a continuous perceptual flow, this idea dates back decades when brain rhythms were first suggested to work as periodic shutters. These would gate bouts of information into conscious perception and affect behavioural responses to sensory events. Seminal experimental findings have since largely confirmed brain rhythms as the neural implementation of periodic sampling. However, novel methods, improved experimental designs, and innovative analytical approaches show that the exact roles and functional significance of rhythmic brain activity for cognition remain to be determined. In re-visiting the evidence for rhythmic sampling, the contributions to this Special Issue gave a mixed picture: Studies testing for rhythmic patterns in behavioural performance largely supported the notion. However, at odds with previous results, most attempts to link behavioural outcomes with the phase of neural rhythms did not find supporting evidence. Also, contrasting earlier results, studies that used external sensory or electrical stimulation to control neural phase ('entrainment') failed to find support for rhythmic sampling in behavioural performance despite other research, included here, that reported neural indicators of entrainment. This Special Issue therefore points out interesting divides in the study of rhythmic sampling across different domains and highlights the importance of publishing negative findings and replications to improve our understanding of the role of rhythms in cognition.

18 citations
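The behavioural side of the rhythmic-sampling question described above is often tested by looking for periodicity in a detection-accuracy time course. Below is a minimal, self-contained sketch of that idea, with entirely simulated data and invented parameter values (sampling rate, rhythm frequency, noise level), not figures from any study in the Special Issue:

```python
import numpy as np

# Hypothetical illustration of testing for a behavioural rhythm:
# all parameter values below are invented for demonstration.
rng = np.random.default_rng(0)
fs = 60.0                      # sampling rate of the behavioural time course (Hz)
t = np.arange(0, 2.0, 1 / fs)  # 2 s of cue-to-target delays

# Simulated detection accuracy: an 8 Hz oscillation buried in noise
accuracy = 0.7 + 0.05 * np.sin(2 * np.pi * 8 * t) \
           + 0.02 * rng.standard_normal(t.size)

# Detrend, then take the amplitude spectrum of the accuracy time course
spec = np.abs(np.fft.rfft(accuracy - accuracy.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak_freq = freqs[np.argmax(spec)]
print(f"strongest behavioural rhythm at {peak_freq:.1f} Hz")
```

In practice such spectra are compared against a permutation-based null distribution rather than read off directly, which is where the significance debates summarized in the Special Issue arise.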

Journal ArticleDOI
TL;DR: In this paper, the authors provide a list of 48 wireless EEG devices along with a number of important, sometimes difficult-to-obtain, features and characteristics to enable their side-by-side comparison, together with a brief introduction to each of these aspects and how they may influence one's decision.

15 citations

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the field lacks a widely accepted definition of interbrain synchrony, and that a potpourri of tasks and empirical methods permits undue flexibility when testing the hypothesis.

14 citations

Journal ArticleDOI
01 Mar 2022-ENeuro
TL;DR: In this paper, the authors conduct a systematic review and the first quantitative meta-analysis of fNIRS hyperscanning of cooperation, based on thirteen studies with 890 human participants.
Abstract: Single-brain neuroimaging studies have shown that human cooperation is associated with neural activity in frontal and temporoparietal regions. However, it remains unclear whether single-brain studies are informative about cooperation in real life, where people interact dynamically. Such dynamic interactions have become the focus of interbrain studies. An advantageous technique in this regard is functional near-infrared spectroscopy (fNIRS) because it is less susceptible to movement artifacts than more conventional techniques like electroencephalography (EEG) or functional magnetic resonance imaging (fMRI). We conducted a systematic review and the first quantitative meta-analysis of fNIRS hyperscanning of cooperation, based on thirteen studies with 890 human participants. Overall, the meta-analysis revealed evidence of statistically significant interbrain synchrony while people were cooperating, with large overall effect sizes in both frontal and temporoparietal areas. All thirteen studies observed significant interbrain synchrony in the prefrontal cortex (PFC), suggesting that this region is particularly relevant for cooperative behavior. The consistency in these findings is unlikely to be because of task-related activations, given that the relevant studies used diverse cooperation tasks. Together, the present findings support the importance of interbrain synchronization of frontal and temporoparietal regions in interpersonal cooperation. Moreover, the present article highlights the usefulness of meta-analyses as a tool for discerning patterns in interbrain dynamics.

12 citations
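The inverse-variance pooling at the heart of a quantitative meta-analysis like the one described above can be sketched in a few lines. The effect sizes and standard errors below are invented for illustration, not values from the thirteen fNIRS studies:

```python
import numpy as np

# Hypothetical fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors are made up, not from the paper.
effects = np.array([0.9, 1.1, 0.7, 1.3, 0.8])   # per-study effect sizes
se = np.array([0.30, 0.25, 0.40, 0.35, 0.20])   # per-study standard errors

w = 1.0 / se**2                              # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)     # weighted mean effect
pooled_se = np.sqrt(1.0 / np.sum(w))         # SE of the pooled effect
z = pooled / pooled_se                       # z-test of the pooled effect

print(f"pooled effect = {pooled:.2f} +/- {pooled_se:.2f}, z = {z:.1f}")
```

A full meta-analysis of this kind would typically also estimate between-study heterogeneity (a random-effects model) before interpreting the pooled effect.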

References
Journal ArticleDOI
TL;DR: FieldTrip is an open source software package, implemented as a MATLAB toolbox, that includes a complete set of consistent and user-friendly high-level functions allowing experimental neuroscientists to analyze MEG, EEG, and other electrophysiological data.
Abstract: This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.

7,963 citations
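The nonparametric permutation tests the FieldTrip paper mentions rest on a simple idea: shuffle condition labels to build a null distribution of the test statistic. Below is a minimal single-channel, two-condition sketch on simulated data; it is not FieldTrip's actual implementation, which operates on channel- and source-level clusters:

```python
import numpy as np

# Minimal nonparametric permutation test on simulated data
# (invented effect sizes; FieldTrip itself works on full sensor arrays).
rng = np.random.default_rng(1)
cond_a = rng.normal(1.0, 1.0, 30)   # e.g. ERP amplitude, condition A
cond_b = rng.normal(0.3, 1.0, 30)   # condition B

observed = cond_a.mean() - cond_b.mean()
pooled = np.concatenate([cond_a, cond_b])

n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)            # shuffle condition labels
    null[i] = perm[:30].mean() - perm[30:].mean()

p = np.mean(np.abs(null) >= abs(observed))    # two-sided permutation p-value
print(f"observed difference = {observed:.2f}, permutation p = {p:.4f}")
```

Because the null distribution is built from the data themselves, the test makes no Gaussian assumptions, which is why it is popular for EEG/MEG statistics.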

Journal ArticleDOI
TL;DR: The FAIR Data Principles are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.

7,602 citations

Journal ArticleDOI
TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Abstract: A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.

5,683 citations
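The link the paper draws between low power and overestimated effect sizes (the "winner's curse") is easy to demonstrate by simulation. The sketch below uses invented numbers: a small true effect, small samples, and an approximate z criterion in place of an exact t-test:

```python
import numpy as np

# Hedged simulation of the winner's curse: with low power, the studies
# that happen to reach significance overestimate the true effect.
# All numbers are illustrative, not taken from the paper.
rng = np.random.default_rng(42)
true_d, n, n_studies = 0.3, 20, 20000   # small true effect, small samples

# One-sample design: estimated effect = mean / sd of n draws from N(true_d, 1)
samples = rng.normal(true_d, 1.0, size=(n_studies, n))
d_hat = samples.mean(axis=1) / samples.std(axis=1, ddof=1)

# Approximate significance via a one-sided z criterion on t = d_hat * sqrt(n)
significant = d_hat * np.sqrt(n) > 1.645

mean_all = d_hat.mean()
mean_sig = d_hat[significant].mean()
print(f"power ~ {significant.mean():.2f}; mean d_hat overall = {mean_all:.2f}, "
      f"among significant studies = {mean_sig:.2f}")
```

With these settings power is well under 0.5, and the average effect estimate among the "significant" studies is markedly larger than the true effect, which is exactly the inflation mechanism the paper describes.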

Journal ArticleDOI
28 Aug 2015-Science
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.

5,532 citations

Journal ArticleDOI
01 Aug 2005-Chance
TL;DR: In this paper, the authors discuss the implications of these problems for the conduct and interpretation of research and conclude that the probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and the ratio of true to no relationships among the relationships probed in each scientific field.
Abstract: There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research. It can be proven that most claimed research findings are false.

4,999 citations
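The essay's central quantity, the post-study probability that a flagged finding is true (the positive predictive value, PPV), follows directly from the prior odds R of a true relationship, the study's power, and alpha. This is the standard formula from the paper; the example inputs are illustrative:

```python
# PPV formula from the paper: (1 - beta) * R / ((1 - beta) * R + alpha),
# where R is the pre-study odds that a probed relationship is true.
def ppv(R, power, alpha=0.05):
    """Positive predictive value of a 'significant' research claim."""
    return (power * R) / (power * R + alpha)

# Illustrative scenarios (the R and power values are made up):
# a field probing mostly null relationships with typical low power...
low = ppv(R=0.1, power=0.2)
# ...versus well-motivated hypotheses tested with high power.
high = ppv(R=1.0, power=0.8)

print(f"PPV, low-power exploratory field: {low:.2f}")
print(f"PPV, high-power confirmatory field: {high:.2f}")
```

Under the first scenario fewer than a third of significant claims are true, which is the arithmetic behind the essay's headline conclusion.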