Open Access · Journal Article · DOI

Data quality and reliability metrics for event-related potentials (ERPs): The utility of subject-level reliability.

TLDR
In this paper, the authors review three types of measurement metrics: data quality, group-level internal consistency, and subject-level internal consistency, and demonstrate how failing to consider data quality and internal consistency can undermine statistical inferences.
Citations

Standardized measurement error: A universal metric of data quality for averaged event-related potentials.

TL;DR: In this paper, the authors proposed the standardized measurement error (SME), which is a special case of the standard error of measurement and can be applied to virtually any value that is derived from averaged ERP waveforms.

The Data-Processing Multiverse of Event-Related Potentials (ERPs): A Roadmap for the Optimization and Standardization of ERP Processing and Reduction Pipelines

TL;DR: In this paper, a multiverse analysis of a data processing pipeline examines the impact of a large set of different reasonable choices to determine the robustness of effects, such as the effect of different decisions on between-trial standard deviations and between-condition differences (i.e., experimental effects).

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores part 1: Algorithms, framework, and implementation.

TL;DR: The ERP Reliability Analysis (ERA) toolbox as discussed by the authors is designed for estimating ERP score reliability using generalizability (G) theory, which is well suited for ERPs.

Utility of linear mixed effects models for event-related potential research with infants and children

TL;DR: In this paper, an alternative approach to ERP analysis is proposed, called linear mixed effects (LME) modeling, which offers unique utility in developmental ERP research and has been shown to yield accurate, unbiased results even when subjects have low trial counts.

Using generalizability theory and the ERP reliability analysis (ERA) toolbox for assessing test-retest reliability of ERP scores part 2: Application to food-based tasks and stimuli.

TL;DR: As mentioned in this paper, the reliability of food-related ERPs has not been tested, and the reliability of these ERPs may be improved with changes in task stimuli, task instructions, and study procedures.
References

Confidence intervals for standardized linear contrasts of means.

TL;DR: The proposed confidence interval methods are easy to compute, do not require equal population variances, and perform better than the currently available methods when the population variances are not equal.

Using trial-level data and multilevel modeling to investigate within-task change in event-related potentials.

TL;DR: The advantages of using multilevel modeling (MLM) to examine trial-level data when investigating change in neurocognitive processes across the course of an experiment are presented, along with the potential to contribute to a number of different theoretical domains within psychology.

Comparing the error-related negativity across groups: The impact of error- and trial-number differences.

TL;DR: It is demonstrated that, across participants, the number of errors correlates with the amplitude of the ERN independently of the number of errors included in ERN quantification per participant, constituting a possible confound when such variance is unaccounted for.

Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

TL;DR: The goal here is to provide data compiled from a large sample of healthy participants performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages.

Psychometric properties of neural responses to monetary and social rewards across development

TL;DR: These data support the use of reward-related ERPs elicited by multiple reward types in studies of biomarkers of psychopathology and provide novel information about the psychometric properties of the social RewP/FN.
Frequently Asked Questions (9)
Q1. What contributions have the authors mentioned in the paper "Data quality and reliability metrics for event-related potentials (erps): the utility of subject-level reliability" ?

In this primer, the authors review three types of measurement metrics: data quality, group-level internal consistency, and subject-level internal consistency. Data quality estimates characterize the precision of ERP scores but provide no inherent information about whether scores are precise enough for examining individual differences. Group-level internal consistency characterizes the ratio of between-person differences to the precision of those scores, and provides a single reliability estimate for an entire group of participants that risks masking low reliability for some individuals. The authors apply each metric to published error-related negativity (ERN) and reward positivity (RewP) data and demonstrate how failing to consider data quality and internal consistency can undermine statistical inferences. The authors conclude with general comments on how these estimates may be used to improve measurement quality and methodological transparency.

When within-person variance is high relative to total variance, subject-level internal consistency will be closer to 0 (i.e., between-person variance is likely too low, given within-person variance, for examining individual differences).

Subject-level internal consistency can be used to exclude participants whose internal consistency is too low for an intended purpose.

An advantage of computing split-half internal consistency over coefficient alpha (i.e., Cronbach's alpha) is that all available ERP scores are used in its estimation, whereas the estimation of coefficient alpha requires each participant to have the same number of trials.

Although data quality estimates provide little useful information to justify comparing individual differences with external correlates, they can help to justify that data quality is high enough to compare between-condition and between-group differences.

Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores Part 1: Algorithms, framework, and implementation.

The formula for estimating subject-level internal consistency is an extension of the dependability formula from Equation 2:

ϕ_jk = σ²_p / (σ²_p + σ²_ijk / n_ijk)   (Eq. 3)

Subject-level dependability for a given person, j, from a group, k, (ϕ_jk) is computed as a function of between-person variance (σ²_p), person-specific between-trial variance (σ²_ijk), and the person-specific number of included trials (n_ijk).
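As a minimal sketch of Eq. 3, subject-level dependability can be computed from a participant's single-trial scores once a between-person variance estimate is available; the trial data and the σ²_p value below are hypothetical, and the function name is illustrative rather than part of the ERA Toolbox:

```python
import numpy as np

def subject_dependability(trial_scores, between_person_var):
    """Subject-level dependability (Eq. 3) for one participant.

    trial_scores: 1-D array of single-trial ERP scores for person j in group k.
    between_person_var: estimate of the group's between-person variance
        (sigma^2_p), assumed to come from a separate generalizability-theory
        variance decomposition.
    """
    n_ijk = len(trial_scores)
    # Person-specific between-trial variance (sigma^2_ijk)
    sigma2_ijk = np.var(trial_scores, ddof=1)
    return between_person_var / (between_person_var + sigma2_ijk / n_ijk)

# Hypothetical participant: 30 trials around a true score of 5 uV with
# trial-noise SD of 4 uV, and an assumed sigma^2_p of 9.
rng = np.random.default_rng(0)
trials = rng.normal(loc=5.0, scale=4.0, size=30)
phi_jk = subject_dependability(trials, between_person_var=9.0)
```

Because the within-person term is divided by the trial count, dependability approaches 1 as more trials are retained for that participant, which is why low trial counts can make an individual's score too unreliable for individual-differences analyses.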

If a researcher wishes to operate within the classical test theory framework, the average of randomly resampled split-half internal consistency coefficients could be estimated to characterize ERP score internal consistency (see Clayson et al., 2021).

Estimates from generalizability theory can overcome this disadvantage by using ERPscores from all trials in the estimation of internal consistency, which removes the sampling error endemic to selecting an approach to split the data (Baldwin, Larson, & Clayson, 2015; Carbine et al., in press; Clayson, Carbine, et al., in press; Clayson & Miller, 2017a, 2017b).Β