Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs.
TL;DR: A method for combining results across independent-groups and repeated measures designs is described, the conditions under which such an analysis is appropriate are discussed, and a meta-analysis procedure using design-specific estimates of sampling variance is presented.
Abstract: When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.
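To make requirements (a) and (c) concrete, here is a minimal Python sketch of the kind of procedure the abstract describes. It assumes the raw-score (independent-groups) metric as the common metric, converts a repeated measures effect size with d_IG = d_RM * sqrt(2(1 - r)), and uses common large-sample variance approximations; the function names, example data, and the simple fixed-effect pooling are illustrative, not the paper's exact formulas.

```python
import math

def d_independent(m_t, m_c, sd_pooled, n_t, n_c):
    """Effect size and sampling variance from an independent-groups design."""
    d = (m_t - m_c) / sd_pooled
    # Large-sample variance approximation for a standardized mean difference
    v = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, v

def d_repeated(m_post, m_pre, sd_diff, r, n):
    """Effect size from a repeated measures design, transformed into the
    raw-score metric so it estimates the same quantity as d_independent."""
    d_rm = (m_post - m_pre) / sd_diff      # standardized by SD of change scores
    d_ig = d_rm * math.sqrt(2 * (1 - r))   # convert to the raw-score metric
    # Design-specific variance: repeated measures estimates are more
    # precise when the pre-post correlation r is high
    v = (1.0 / n + d_ig**2 / (2 * n)) * 2 * (1 - r)
    return d_ig, v

def pool_fixed(estimates):
    """Inverse-variance (fixed-effect) combination of (d, v) pairs."""
    weights = [1.0 / v for _, v in estimates]
    d_bar = sum(w * d for w, (d, _) in zip(weights, estimates)) / sum(weights)
    return d_bar, 1.0 / sum(weights)

# Example with made-up summary statistics: one study of each design,
# combined on the common metric
studies = [d_independent(10.2, 9.1, 2.0, 30, 30),
           d_repeated(10.5, 9.4, 1.4, 0.6, 25)]
print(pool_fixed(studies))
```

The design-specific variances are what keep the pooling honest: a repeated measures study with highly correlated measurements yields a smaller sampling variance, and therefore a larger weight, than an independent-groups study of the same size.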
Citations
01 Jun 2015
TL;DR: A practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a-priori power analyses and meta-analyses, with a detailed overview of the similarities and differences between within- and between-subjects designs.
Abstract: Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a-priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
5,374 citations
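The within/between distinction this primer emphasizes comes down to which standardizer is used. A short sketch following the primer's d_s, d_z, and d_rm formulas (the implementation and function names here are mine):

```python
import math

def cohens_d_s(m1, m2, sd1, sd2, n1, n2):
    """Between-subjects d: mean difference over the pooled SD."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def cohens_d_z(mean_diff, sd_diff):
    """Within-subjects d_z: mean difference over the SD of the difference
    scores; this is what an a-priori power analysis for a paired t-test needs."""
    return mean_diff / sd_diff

def cohens_d_rm(mean_diff, sd1, sd2, r):
    """Within-subjects d_rm: incorporates the correlation r between
    measures so the value is comparable to a between-subjects d."""
    sd_diff = math.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)
    return (mean_diff / sd_diff) * math.sqrt(2 * (1 - r))
```

When r is large, d_z can be much larger than d_rm, which is why the primer stresses reporting the correlation between measures.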
TL;DR: Meta-analytic techniques were used to determine the patterns of mean-level change in personality traits across the life course; people increase in social dominance, conscientiousness, and emotional stability in young adulthood, whereas social vitality and openness increase in adolescence and then decrease in old age.
Abstract: The present study used meta-analytic techniques (number of samples = 92) to determine the patterns of mean-level change in personality traits across the life course. Results showed that people increase in measures of social dominance (a facet of extraversion), conscientiousness, and emotional stability, especially in young adulthood (age 20 to 40). In contrast, people increase on measures of social vitality (a 2nd facet of extraversion) and openness in adolescence but then decrease in both of these domains in old age. Agreeableness changed only in old age. Of the 6 trait categories, 4 demonstrated significant change in middle and old age. Gender and attrition had minimal effects on change, whereas longer studies and studies based on younger cohorts showed greater change.
2,791 citations
TL;DR: The effect of unemployment on mental health was examined with meta-analytic methods across 237 cross-sectional and 87 longitudinal studies; the average overall effect size was d = 0.51, with unemployed persons showing more distress than employed persons.
2,019 citations
TL;DR: Three alternative effect size estimates for studies with repeated measurements in both treatment and control groups are compared.
Abstract: Previous research has recommended several measures of effect size for studies with repeated measurements in both treatment and control groups. Three alternate effect size estimates were compared in...
1,427 citations
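A common estimator in this literature (the pretest-posttest-control design) standardizes the difference in mean pre-to-post change between groups by the pooled pretest SD. The truncated abstract above does not give the formula, so this sketch follows the usual textbook version, with an approximate small-sample bias correction; the function name is illustrative:

```python
import math

def d_ppc(m_pre_t, m_post_t, m_pre_c, m_post_c,
          sd_pre_t, sd_pre_c, n_t, n_c):
    """Difference in mean change (treatment minus control),
    standardized by the pooled pretest standard deviation."""
    df = n_t + n_c - 2
    sd_pre = math.sqrt(((n_t - 1) * sd_pre_t**2
                        + (n_c - 1) * sd_pre_c**2) / df)
    cp = 1 - 3 / (4 * df - 1)  # approximate small-sample bias correction
    change = (m_post_t - m_pre_t) - (m_post_c - m_pre_c)
    return cp * change / sd_pre
```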
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context: Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. Objective: To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. Data Sources: Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Study Selection: Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. Data Extraction: Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. Data Synthesis: From a pool of 10 903 articles, we identified 609 eligible studies enrolling 35 226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. Conclusion: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
1,420 citations
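The review above pools with "random effects" and reports I², but does not say which between-study variance estimator it used, so the sketch below shows the common DerSimonian-Laird version as a generic illustration:

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q, tau^2, and I^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fixed)**2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    d_pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    return d_pooled, tau2, i2
```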
References
01 Jan 1962
TL;DR: This book introduces the principles of estimation and inference for means and variances and presents the design and analysis of single-factor and factorial experiments, including those having repeated measures on the same element.
Abstract: CHAPTER 1: Introduction to Design CHAPTER 2: Principles of Estimation and Inference: Means and Variance CHAPTER 3: Design and Analysis of Single-Factor Experiments: Completely Randomized Design CHAPTER 4: Single-Factor Experiments Having Repeated Measures on the Same Element CHAPTER 5: Design and Analysis of Factorial Experiments: Completely-Randomized Design CHAPTER 6: Factorial Experiments: Computational Procedures and Numerical Example CHAPTER 7: Multifactor Experiments Having Repeated Measures on the Same Element CHAPTER 8: Factorial Experiments in which Some of the Interactions are Confounded CHAPTER 9: Latin Squares and Related Designs CHAPTER 10: Analysis of Covariance
25,607 citations
TL;DR: This book covers the design and analysis of single-factor experiments (completely randomized designs) and factorial experiments in which some of the interactions are confounded.
24,665 citations
"Combining effect size estimates in ..." refers result in this paper
...This is consistent with a compound symmetric error structure for repeated measures data (Winer, 1971)....
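Compound symmetry means a constant variance at every time point and a single common covariance between any pair of time points. A minimal illustrative sketch of such a covariance matrix (NumPy; not from the paper):

```python
import numpy as np

def compound_symmetric_cov(k, sigma2, rho):
    """k x k covariance matrix with variance sigma2 on the diagonal
    and covariance rho * sigma2 between every pair of time points."""
    return sigma2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))
```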
01 Jan 1979
11,977 citations
"Combining effect size estimates in ..." refers background in this paper
...To the extent that treatment or time affects individuals differentially (a subject by time interaction), scores will grow more or less variable over time (Cook & Campbell, 1979)....
01 Jan 1985
TL;DR: Methods are presented for estimating effect sizes from a series of experiments and for fitting fixed effect, general linear, and random effects models to effect sizes.
Abstract: Preface. Introduction. Data Sets. Tests of Statistical Significance of Combined Results. Vote-Counting Methods. Estimation of a Single Effect Size: Parametric and Nonparametric Methods. Parametric Estimation of Effect Size from a Series of Experiments. Fitting Parametric Fixed Effect Models to Effect Sizes: Categorical Methods. Fitting Parametric Fixed Effect Models to Effect Sizes: General Linear Models. Random Effects Models for Effect Sizes. Multivariate Models for Effect Sizes. Combining Estimates of Correlation Coefficients. Diagnostic Procedures for Research Synthesis Models. Clustering Estimates of Effect Magnitude. Estimation of Effect Size When Not All Study Outcomes Are Observed. Meta-Analysis in the Physical and Biological Sciences. Appendix. References. Index.
9,769 citations
"Combining effect size estimates in ..." refers methods in this paper
...When the research base consists entirely of independent-groups designs, the calculation of effect sizes is straightforward and has been described in virtually every treatment of meta-analysis (Hedges & Olkin, 1985; Hunter & Schmidt, 1990; Rosenthal, 1991)....