Open Access · Journal Article · DOI

How to get statistically significant effects in any ERP experiment (and why you shouldn't)

Steven J. Luck, +1 more
- 01 Jan 2017 - 
- Vol. 54, Iss: 1, pp 146-157
TLDR
This paper demonstrates how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one bogus effect exceeding 50% in many experiments.
Abstract
ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions.
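The abstract's first problem, selecting the measurement window from the grand average, can be illustrated with a small Monte Carlo sketch. All parameters below (participant count, number of time points, window width) are arbitrary assumptions for illustration; both conditions are pure noise, so any significant difference is bogus by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_time, n_exp, win = 20, 100, 500, 10  # assumed toy sizes

false_pos = 0
for _ in range(n_exp):
    # Null data: condition-A-minus-B difference waves that are pure noise.
    diff = rng.normal(size=(n_subj, n_time))
    grand = diff.mean(axis=0)
    # Choose the window where the grand-average difference looks biggest.
    window_sums = np.convolve(grand, np.ones(win), mode="valid")
    start = int(np.argmax(np.abs(window_sums)))
    scores = diff[:, start:start + win].mean(axis=1)
    # One-sample t test against zero on the biased measurements.
    t = scores.mean() / (scores.std(ddof=1) / np.sqrt(n_subj))
    if abs(t) > 2.093:  # two-tailed critical t, df = 19, alpha = .05
        false_pos += 1

print(f"false-positive rate: {false_pos / n_exp:.2f}")  # well above .05
```

Because the window is chosen where the noise happens to look largest, the nominal 5% error rate is badly inflated, in line with the paper's argument.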



Citations

An Introduction to the Event-Related Potential Technique

Marina Schmid
TL;DR: This book provides an introduction to the event-related potential technique.
Journal ArticleDOI

Combined Electrophysiological and Behavioral Evidence for the Suppression of Salient Distractors.

TL;DR: These findings provide a crucial connection between the behavioral and neural measures of suppression, which opens the door to using the PD component to assess the timing and neural substrates of the behaviorally observed suppression.
Journal ArticleDOI

The interpretation of mu suppression as an index of mirror neuron activity: Past, present and future

TL;DR: Several key potential shortcomings with the use and interpretation of mu suppression, documented in the older literature and highlighted by more recent reports, are explored here.
Journal ArticleDOI

Sample size calculations in human electrophysiology (EEG and ERP) studies: A systematic review and recommendations for increased rigor.

TL;DR: Many EEG and ERP studies omit the information needed to calculate sample sizes, which hinders study design, grant applications, and meta-analyses, and makes it difficult to judge whether studies were adequately powered to detect effects of interest.
References
Journal Article

R: A language and environment for statistical computing.

R Core Team
- 01 Jan 2014 - 
TL;DR: R is a free software language and environment for statistical computing and graphics, developed and distributed by the R Foundation for Statistical Computing.
Journal ArticleDOI

FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data

TL;DR: FieldTrip is an open source software package that is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data.
Journal ArticleDOI

Nonparametric statistical testing of EEG- and MEG-data

TL;DR: This paper formulates a nonparametric permutation test for EEG and MEG data, shows that it controls the false alarm rate under the null hypothesis, and enables neuroscientists to construct their own statistical tests that maximize sensitivity to the expected effect.
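The cluster-based permutation idea behind this reference can be sketched in a few lines. The toy data, effect size, and threshold below are illustrative assumptions; FieldTrip implements the full method for real multichannel data. Here a sign-flipping permutation of within-subject difference waves builds a null distribution of the maximum cluster mass.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_time = 20, 60
diff = rng.normal(size=(n_subj, n_time))  # difference waves (toy data)
diff[:, 20:30] += 0.8                     # injected true effect (assumed size)

def max_cluster_mass(data, t_crit=2.093):  # critical t, df = 19, alpha = .05
    """Largest summed |t| over contiguous supra-threshold time points."""
    t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(len(data)))
    mass, best = 0.0, 0.0
    for v in np.abs(t):
        mass = mass + v if v > t_crit else 0.0  # grow or reset cluster
        best = max(best, mass)
    return best

observed = max_cluster_mass(diff)
# Null distribution: randomly flip the sign of each participant's wave.
null = [max_cluster_mass(diff * rng.choice([-1, 1], size=(n_subj, 1)))
        for _ in range(500)]
p = float(np.mean([m >= observed for m in null]))
print(f"cluster p = {p:.3f}")
```

Because only the single maximum cluster mass per permutation enters the null distribution, the familywise error rate is controlled across all time points without a per-point correction.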
Journal ArticleDOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
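The two consequences this reference names, missed true effects and inflated effect-size estimates, can both be seen in a short simulation. The true effect size, sample sizes, and the approximate critical value of 2.0 are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, d=0.5, reps=2000, t_crit=2.0):
    """Power and mean effect estimate among significant results.

    d is the assumed true standardized effect; t_crit = 2.0 is an
    approximate two-tailed critical value.
    """
    sig_estimates, hits = [], 0
    for _ in range(reps):
        x = rng.normal(d, 1.0, size=n)  # paired differences, true effect d
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if abs(t) > t_crit:
            hits += 1
            sig_estimates.append(x.mean())  # estimate kept only if "significant"
    return hits / reps, float(np.mean(sig_estimates))

results = {n: simulate(n) for n in (10, 40)}
for n, (power, d_sig) in results.items():
    print(f"n={n}: power={power:.2f}, mean significant effect={d_sig:.2f}")
```

At the smaller sample size, most true effects are missed, and the effects that do reach significance overestimate the true value of 0.5, which is exactly the overestimation the paper describes.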
Journal ArticleDOI

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
- 28 Aug 2015 - 
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.