Open Access Journal Article

Data quality and reliability metrics for event-related potentials (ERPs): The utility of subject-level reliability.

TLDR
In this paper, the authors review three types of measurement metrics: data quality, group-level internal consistency, and subject-level internal consistency, and demonstrate how failing to consider data quality and internal consistency can undermine statistical inferences.
Citations
Journal Article

Standardized measurement error: A universal metric of data quality for averaged event-related potentials.

TL;DR: In this paper, the authors proposed the standardized measurement error (SME), which is a special case of the standard error of measurement and can be applied to virtually any value that is derived from averaged ERP waveforms.
Journal Article

The Data-Processing Multiverse of Event-Related Potentials (ERPs): A Roadmap for the Optimization and Standardization of ERP Processing and Reduction Pipelines

TL;DR: In this paper, a multiverse analysis of the data-processing pipeline examines the impact of a large set of different reasonable processing choices to determine the robustness of effects, such as the effect of those decisions on between-trial standard deviations and between-condition differences (i.e., experimental effects).
Journal Article

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores part 1: Algorithms, framework, and implementation.

TL;DR: The ERP Reliability Analysis (ERA) toolbox as discussed by the authors is designed for estimating ERP score reliability using generalizability (G) theory, which is well suited for ERPs.
Journal Article

Utility of linear mixed effects models for event-related potential research with infants and children

TL;DR: In this paper, an alternative approach to ERP analysis is proposed, called linear mixed effects (LME) modeling, which offers unique utility in developmental ERP research and has been shown to yield accurate, unbiased results even when subjects have low trial counts.
Journal Article

Using generalizability theory and the ERP reliability analysis (ERA) toolbox for assessing test-retest reliability of ERP scores part 2: Application to food-based tasks and stimuli.

TL;DR: As mentioned in this paper, the reliability of food-related ERPs has not been tested, and the reliability of these ERPs may be improved with changes in task stimuli, task instructions, and study procedures.
References
Journal Article

Psychometrics and the neuroscience of individual differences: Internal consistency limits between-subjects effects.

TL;DR: The authors demonstrate how variability in the internal consistency of neural measures limits between-subjects (i.e., individual differences) effects, and argue that internal consistency reliability should be routinely reported in all individual differences studies.
Journal Article

Reliability of the ERN across multiple tasks as a function of increasing errors

TL;DR: Examination of the internal reliability of the ERN across the flanker, Stroop, and go/no-go tasks as a function of error number suggests that the flanker task might be prioritized when assessing the ERN.
Journal Article

Assessing the internal consistency of the event-related potential: An example analysis.

TL;DR: It is concluded that the internal consistency and effect size of ERP findings greatly depend on the quantification strategy, the comparisons and analyses performed, and the signal-to-noise ratio (SNR).
Journal Article

Methodological reporting behavior, sample sizes, and statistical power in studies of event-related potentials: Barriers to reproducibility and replicability.

TL;DR: It is indicated that failure to follow key reporting guidelines is ubiquitous and that ERP studies are primarily powered to detect large effects; such low power and insufficient adherence to reporting guidelines represent substantial barriers to replication efforts.
Journal Article

Reduced neural response to reward and pleasant pictures independently relate to depression.

TL;DR: It is suggested that a blunted RewP and LPP reflect independent neural deficits in MDD – which could be used in conjunction to improve the classification of depression.
Frequently Asked Questions (9)
Q1. What contributions have the authors mentioned in the paper "Data quality and reliability metrics for event-related potentials (ERPs): The utility of subject-level reliability"?

In this primer, the authors review three types of measurement metrics: data quality, group-level internal consistency, and subject-level internal consistency. Data quality estimates characterize the precision of ERP scores but provide no inherent information about whether scores are precise enough for examining individual differences. Group-level internal consistency characterizes the ratio of between-person differences to the precision of those scores, and provides a single reliability estimate for an entire group of participants that risks masking low reliability for some individuals. The authors apply each metric to published error-related negativity (ERN) and reward positivity (RewP) data and demonstrate how failing to consider data quality and internal consistency can undermine statistical inferences. The authors conclude with general comments on how these estimates may be used to improve measurement quality and methodological transparency.

When within-person variance is high relative to total variance, subject-level internal consistency will be closer to 0 (i.e., between-person variance is likely too low, given within-person variance, for examining individual differences).

Subject-level internal consistency can be used to exclude participants with internal consistency that is too low for an intended purpose.

An advantage of computing split-half internal consistency over coefficient alpha (i.e., Cronbach's alpha) is that all available ERP scores are used in its estimation, while the estimation of coefficient alpha requires each participant to have the same number of trials.

Although data quality estimates provide little useful information to justify comparing individual differences with external correlates, they can help justify that data quality is high enough to compare between-condition and between-group differences.

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores Part 1: Algorithms, framework, and implementation.

The formula for estimating subject-level internal consistency is an extension of the dependability formula from Equation 2:

\phi_{jk} = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_{ijk}^2 / n_{ijk}} \qquad \text{(Eq. 3)}

Subject-level dependability for a given person, j, from a group, k, (\phi_{jk}) is computed as a function of between-person variance (\sigma_p^2), person-specific between-trial variance (\sigma_{ijk}^2), and the person-specific number of included trials (n_{ijk}).
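To make Equation 3 concrete, here is a minimal Python sketch of the subject-level dependability computation (the function name and argument names are illustrative, not taken from the paper or the ERA Toolbox):

```python
def subject_level_dependability(var_between_person, var_trial, n_trials):
    """Eq. 3: phi_jk = sigma_p^2 / (sigma_p^2 + sigma_ijk^2 / n_ijk).

    var_between_person: between-person variance (sigma_p^2)
    var_trial:          person-specific between-trial variance (sigma_ijk^2)
    n_trials:           person-specific number of included trials (n_ijk)
    """
    return var_between_person / (var_between_person + var_trial / n_trials)

# Averaging more trials shrinks the error term sigma_ijk^2 / n_ijk relative
# to between-person variance, so dependability rises with trial count.
low = subject_level_dependability(4.0, 100.0, 10)    # noisy subject, few trials
high = subject_level_dependability(4.0, 100.0, 100)  # same subject, many trials
```

Note how the same participant-level trial noise yields very different dependability depending on the number of retained trials, which is why the formula is evaluated per person rather than once per group.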

If a researcher wishes to operate within the classical test theory framework, the average of randomly resampled split-half internal consistency coefficients could be estimated to characterize ERP score internal consistency (see Clayson et al., 2021).

Estimates from generalizability theory can overcome this disadvantage by using ERP scores from all trials in the estimation of internal consistency, which removes the sampling error endemic to selecting an approach to split the data (Baldwin, Larson, & Clayson, 2015; Carbine et al., in press; Clayson, Carbine, et al., in press; Clayson & Miller, 2017a, 2017b).