Open Access Journal Article

Data quality and reliability metrics for event-related potentials (ERPs): The utility of subject-level reliability.

TLDR
In this paper, the authors review three types of measurement metrics: data quality, group-level internal consistency, and subject-level internal consistency, and demonstrate how failing to consider data quality and internal consistency can undermine statistical inferences.
Citations
Journal Article

Standardized measurement error: A universal metric of data quality for averaged event-related potentials.

TL;DR: In this paper, the authors proposed the standardized measurement error (SME), which is a special case of the standard error of measurement and can be applied to virtually any value that is derived from averaged ERP waveforms.
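
For illustration, here is a minimal sketch (not the authors' implementation) of how SME-style estimates are commonly computed for one participant's single-trial scores: the analytic form for a mean-amplitude score is the standard error of the single-trial amplitudes, and bootstrapping generalizes the idea to other scoring methods. The function names and NumPy usage below are illustrative.

import numpy as np

def analytic_sme(single_trial_amps):
    """SME for a mean-amplitude score: SD of single-trial amplitudes / sqrt(n trials)."""
    amps = np.asarray(single_trial_amps, dtype=float)
    return amps.std(ddof=1) / np.sqrt(len(amps))

def bootstrap_sme(single_trial_amps, score_fn=np.mean, n_boot=1000, seed=0):
    """SME for an arbitrary scoring function: SD of the score across bootstrap resamples of trials."""
    rng = np.random.default_rng(seed)
    amps = np.asarray(single_trial_amps, dtype=float)
    scores = [score_fn(rng.choice(amps, size=len(amps), replace=True)) for _ in range(n_boot)]
    return float(np.std(scores, ddof=1))
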
Journal Article

The Data-Processing Multiverse of Event-Related Potentials (ERPs): A Roadmap for the Optimization and Standardization of ERP Processing and Reduction Pipelines

TL;DR: In this paper, a multiverse analysis of a data-processing pipeline examines a large set of different reasonable processing choices to determine the robustness of effects, such as the impact of those decisions on between-trial standard deviations and on between-condition differences (i.e., experimental effects).
Journal Article

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores part 1: Algorithms, framework, and implementation.

TL;DR: The ERP Reliability Analysis (ERA) toolbox as discussed by the authors is designed for estimating ERP score reliability using generalizability (G) theory, which is well suited for ERPs.
Journal Article

Utility of linear mixed effects models for event-related potential research with infants and children

TL;DR: In this paper, an alternative approach to ERP analysis, linear mixed-effects (LME) modeling, is proposed; it offers unique utility in developmental ERP research and has been shown to yield accurate, unbiased results even when subjects have low trial counts.
Journal Article

Using generalizability theory and the ERP reliability analysis (ERA) toolbox for assessing test-retest reliability of ERP scores part 2: Application to food-based tasks and stimuli.

TL;DR: As noted in this paper, the reliability of food-related ERPs has not previously been tested, and the reliability of these ERPs may be improved with changes in task stimuli, task instructions, and study procedures.
References
Journal Article

Making ERP research more transparent: Guidelines for preregistration

TL;DR: In this article, the authors present an overview of the problems associated with undisclosed analytic flexibility, discuss why and how EEG researchers would benefit from adopting preregistration, provide guidelines and examples on how to preregister data preprocessing and analysis steps in typical ERP studies, and conclude by discussing possibilities and limitations of this open science practice.
Journal Article

Using multilevel models for the analysis of event-related potentials.

TL;DR: In this paper, the authors describe the benefits of multilevel modeling for analyzing psychophysiological data, which often contain repeated observations within participants, and introduce some of the decision-making points in the analytic process, including how to set up the data set, specify the model, conduct hypothesis tests, and visualize the model estimates.
Journal Article

Evaluating the internal consistency of subtraction-based and residualized difference scores: Considerations for psychometric reliability analyses of event-related potentials.

TL;DR: This article provided formulas from classical test theory and generalizability theory for estimating the internal consistency of subtraction-based and residualized difference scores, and applied these formulas to error-related negativity (ERN) and reward positivity (RewP) difference scores.
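
As a point of reference, the classical test theory formula for the reliability of a subtraction-based difference score D = X - Y can be sketched as follows (the article's exact notation and its generalizability-theory extensions may differ); the function and variable names are illustrative.

def difference_score_reliability(sd_x, sd_y, rel_x, rel_y, r_xy):
    """Classical test theory reliability of the difference score D = X - Y."""
    numerator = sd_x**2 * rel_x + sd_y**2 * rel_y - 2 * r_xy * sd_x * sd_y
    denominator = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
    return numerator / denominator

# Example: two equally variable, equally reliable (0.80) components that correlate 0.60
# -> (0.80 + 0.80 - 1.20) / (2.00 - 1.20) = 0.50
rel_d = difference_score_reliability(1.0, 1.0, 0.80, 0.80, 0.60)
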
Journal Article

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores part 1: Algorithms, framework, and implementation.

TL;DR: The ERP Reliability Analysis (ERA) toolbox as discussed by the authors is designed for estimating ERP score reliability using generalizability (G) theory, which is well suited for ERPs.
Journal Article

The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals

TL;DR: Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time; the theoretical implications are discussed along with ways of reducing CI widths and thus improving the precision of effect size estimation.
Frequently Asked Questions (9)
Q1. What contributions have the authors mentioned in the paper "Data quality and reliability metrics for event-related potentials (ERPs): The utility of subject-level reliability"?

In this primer, the authors review three types of measurement metrics: data quality, group-level internal consistency, and subject-level internal consistency. Data quality estimates characterize the precision of ERP scores but provide no inherent information about whether scores are precise enough for examining individual differences. Group-level internal consistency characterizes the ratio of between-person differences to the precision of those scores, and provides a single reliability estimate for an entire group of participants that risks masking low reliability for some individuals. The authors apply each metric to published error-related negativity (ERN) and reward positivity (RewP) data and demonstrate how failing to consider data quality and internal consistency can undermine statistical inferences. The authors conclude with general comments on how these estimates may be used to improve measurement quality and methodological transparency.

When within-person variance is high relative to total variance, subject-level internal consistency will be closer to 0 (i.e., between-person variance is likely too low, given within-person variance, for examining individual differences).

Subject-level internal consistency can be used to exclude participants with internal consistency that is too low for an intended purpose.

An advantage of computing split-half internal consistency over coefficient alpha (i.e., Cronbach’s alpha) is that all available ERP scores are used in its estimation, while the estimation of coefficient alpha requires each participant to have the same number of trials.

Although data quality estimates provide little useful information to justify comparing individual differences with external correlates, they can help to justify that data quality is high enough to compare between-condition and between-group differences.

The formula for estimating subject-level internal consistency is an extension of the dependability formula from Equation 2:

φ_jk = σ_p² / (σ_p² + σ_ijk² / n_ijk)   (Eq. 3)

Subject-level dependability for a given person, j, from a group, k, (φ_jk) is computed as a function of between-person variance (σ_p²), person-specific between-trial variance (σ_ijk²), and the person-specific number of included trials (n_ijk).
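
A minimal sketch of Equation 3 in code, assuming the between-person variance and a participant's between-trial variance have already been estimated (e.g., from a variance-components model); the function and variable names are illustrative, not the ERA Toolbox interface.

def subject_level_dependability(between_person_var, between_trial_var, n_trials):
    """Eq. 3: phi_jk = sigma_p^2 / (sigma_p^2 + sigma_ijk^2 / n_ijk)."""
    return between_person_var / (between_person_var + between_trial_var / n_trials)

# Example: sigma_p^2 = 4.0, a participant's between-trial variance = 25.0 across 30 retained trials
# -> phi_jk = 4.0 / (4.0 + 25.0 / 30) ~= 0.83
phi_jk = subject_level_dependability(4.0, 25.0, 30)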

If a researcher wishes to operate within the classical test theory framework, the average of randomly resampled split-half internal consistency coefficients could be estimated to characterize ERP score internal consistency (see Clayson et al., 2021).
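
A minimal sketch of this resampled split-half approach, assuming each participant's single-trial ERP scores are available: trials are randomly split in half, the half-averages are correlated across participants, the Spearman-Brown correction is applied, and the corrected coefficients are averaged over many random splits (cf. Clayson et al., 2021). The names below are illustrative.

import numpy as np

def mean_resampled_split_half(trial_scores, n_splits=1000, seed=0):
    """Average Spearman-Brown-corrected split-half correlation over random trial splits.

    trial_scores: list of 1-D arrays, one per participant, holding single-trial ERP scores.
    """
    rng = np.random.default_rng(seed)
    corrected = []
    for _ in range(n_splits):
        half_a, half_b = [], []
        for scores in trial_scores:
            scores = np.asarray(scores, dtype=float)
            idx = rng.permutation(len(scores))
            mid = len(scores) // 2
            half_a.append(scores[idx[:mid]].mean())
            half_b.append(scores[idx[mid:]].mean())
        r = np.corrcoef(half_a, half_b)[0, 1]
        corrected.append(2 * r / (1 + r))  # Spearman-Brown correction to full test length
    return float(np.mean(corrected))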

Estimates from generalizability theory can overcome this disadvantage by using ERP scores from all trials in the estimation of internal consistency, which removes the sampling error endemic to selecting an approach to split the data (Baldwin, Larson, & Clayson, 2015; Carbine et al., in press; Clayson, Carbine, et al., in press; Clayson & Miller, 2017a, 2017b).