Journal ArticleDOI

Technology-enhanced simulation for health professions education: a systematic review and meta-analysis

07 Sep 2011-JAMA (American Medical Association)-Vol. 306, Iss: 9, pp 978-988
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. Objective To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. Data Sources Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Study Selection Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. Data Extraction Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. Data Synthesis From a pool of 10 903 articles, we identified 609 eligible studies enrolling 35 226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32).
Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. Conclusion In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
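The pooling step described in the abstract (random-effects meta-analysis of per-study effect sizes) can be sketched with a minimal DerSimonian-Laird estimator. The effect sizes and variances below are hypothetical illustrations, not the study's actual data:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect (inverse-variance) weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = k - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                     # between-study variance estimate
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, tau2, ci

# Hypothetical example: three studies with effect sizes 1.5, 0.5, 1.0, each with variance 0.04
pooled, tau2, ci = dersimonian_laird([1.5, 0.5, 1.0], [0.04, 0.04, 0.04])
print(round(pooled, 3), round(tau2, 3))  # 1.0 0.21
```

A positive tau² here signals between-study heterogeneity beyond sampling error, which widens the confidence interval relative to a fixed-effect analysis.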
Citations
Journal ArticleDOI
TL;DR: Substantial evidence is provided that substituting high-quality simulation experiences for up to half of traditional clinical hours produces comparable end-of-program educational outcomes and new graduates who are ready for clinical practice.

952 citations


Cites background from "Technology-enhanced simulation for ..."

  • ...Systematic reviews and meta-analyses of the health care literature identify issues with a general lack of appropriately powered, rigorous studies (Cook et al., 2011; Issenberg, McGaghie, Petrusa, Gordon, & Scalese, 2005; Laschinger et al., 2008). Issenberg and colleagues’ (2010) review of 34 years of the medical simulation literature concluded, “While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.” Laschinger et al. (2008) attempted a meta-analysis of all health care literature to provide a synthesis of the evidence on the effectiveness of simulation in prelicensure education, including medicine, nursing, and rehabilitation therapy from 1995 to 2006....

    [...]

Journal ArticleDOI
TL;DR: A systematic review of studies comparing different simulation-based interventions confirmed quantitatively the effectiveness of several instructional design features in simulation-based education.
Abstract: Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I² > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and −0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.

518 citations

Journal ArticleDOI
TL;DR: The aim was to review current serious games for training medical professionals and to evaluate the validity testing of such games.
Abstract: Background: The application of digital games for training medical professionals is on the rise. So-called ‘serious’ games form training tools that provide a challenging simulated environment, ideal for future surgical training. Ultimately, serious games are directed at reducing medical error and subsequent healthcare costs. The aim was to review current serious games for training medical professionals and to evaluate the validity testing of such games. Methods: PubMed, Embase, the Cochrane Database of Systematic Reviews, PsycINFO and CINAHL were searched using predefined inclusion criteria for available studies up to April 2012. The primary endpoint was validation according to current criteria. Results: A total of 25 articles were identified, describing a total of 30 serious games. The games were divided into two categories: those developed for specific educational purposes (17) and commercial games also useful for developing skills relevant to medical personnel (13). Pooling of data was not performed owing to the heterogeneity of study designs and serious games. Six serious games were identified that had a process of validation. Of these six, three games were developed for team training in critical care and triage, and three were commercially available games applied to train laparoscopic psychomotor skills. None of the serious games had completed a full validation process for the purpose of use. Conclusion: Blended and interactive learning by means of serious games may be applied to train both technical and non-technical skills relevant to the surgical field. Games developed or used for this purpose need validation before integration into surgical teaching curricula.

511 citations

Journal ArticleDOI
TL;DR: This work proposes a seven-step plan to overcome the barriers to effective team communication that incorporates education, psychological and organisational strategies and suggests this may be the next major advance in patient outcomes.
Abstract: Modern healthcare is delivered by multidisciplinary, distributed healthcare teams who rely on effective teamwork and communication to ensure effective and safe patient care. However, we know that there is an unacceptable rate of unintended patient harm, and much of this is attributed to failures in communication between health professionals. The extensive literature on teams has identified shared mental models, mutual respect and trust and closed-loop communication as the underpinning conditions required for effective teams. However, a number of challenges exist in the healthcare environment. We explore these in a framework of educational, psychological and organisational challenges to the development of effective healthcare teams. Educational interventions can promote a better understanding of the principles of teamwork, help staff understand each other's roles and perspectives, and help develop specific communication strategies, but may not be sufficient on their own. Psychological barriers, such as professional silos and hierarchies, and organisational barriers such as geographically distributed teams, can increase the chance of communication failures with the potential for patient harm. We propose a seven-step plan to overcome the barriers to effective team communication that incorporates education, psychological and organisational strategies. Recent evidence suggests that improvement in teamwork in healthcare can lead to significant gains in patient safety, measured against efficiency of care, complication rate and mortality. Interventions to improve teamwork in healthcare may be the next major advance in patient outcomes.

480 citations

References
Book
01 Dec 1969
TL;DR: This book presents the concepts of statistical power analysis, covering procedures including the t-test for means, chi-square tests for goodness of fit and contingency tables, and the sign test.
Abstract: Contents: Prefaces. The Concepts of Power Analysis. The t-Test for Means. The Significance of a Product Moment rs (subscript s). Differences Between Correlation Coefficients. The Test That a Proportion is .50 and the Sign Test. Differences Between Proportions. Chi-Square Tests for Goodness of Fit and Contingency Tables. The Analysis of Variance and Covariance. Multiple Regression and Correlation Analysis. Set Correlation and Multivariate Methods. Some Issues in Power Analysis. Computational Procedures.
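As a concrete illustration of the kind of calculation this book tabulates, the power of a two-sided, two-sample t-test can be approximated with the normal distribution using only Python's standard library. This is a sketch using the normal approximation; the exact calculation uses the noncentral t distribution, so this slightly overestimates power for small samples:

```python
import math
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test for standardized
    effect size d (Cohen's d) with n_per_group subjects per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)       # critical value for a two-sided test
    ncp = d * math.sqrt(n_per_group / 2)    # noncentrality parameter
    return z.cdf(ncp - z_crit)              # P(reject H0 | true effect d)

# Classic textbook case: d = 0.5 (medium effect), 64 per group -> power near 0.80
print(round(two_sample_power(0.5, 64), 2))
```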

115,069 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials; this paper presents its revision, PRISMA.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal ArticleDOI
04 Sep 2003-BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the standard heterogeneity test, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
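The I² quantity discussed above is computed from Cochran's Q statistic and the number of studies, I² = max(0, (Q − df) / Q) × 100%. A minimal sketch with hypothetical study data:

```python
def i_squared(effects, variances):
    """I² heterogeneity statistic: the percentage of total variation across
    studies attributable to heterogeneity rather than chance."""
    w = [1.0 / v for v in variances]        # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical: three studies with effects 1.5, 0.5, 1.0, each with variance 0.04
print(round(i_squared([1.5, 0.5, 1.0], [0.04, 0.04, 0.04]), 1))  # 84.0
```

Because I² is a proportion of variation rather than a test statistic, it does not grow mechanically with the number of included trials, which is the shortcoming of the Q test the paper addresses.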

45,105 citations

Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials the results from such analyses should be treated with considerable caution.
Key messages
  • Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials
  • Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases
  • Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials
  • Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews
  • Critical examination of systematic reviews for publication and related biases should be considered a routine procedure

37,989 citations

Journal ArticleDOI
TL;DR: In this article, the authors present guidelines for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges, and the confidence intervals for each of the forms are reviewed.
Abstract: Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges. Relevant to the choice of the coefficient are the appropriate statistical model for the reliability and the application to be made of the reliability results. Confidence intervals for each of the forms are reviewed.
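One of the six Shrout-Fleiss forms, ICC(2,1) (two-way random-effects model, absolute agreement, single rater), can be computed from the mean squares of a two-way ANOVA. A minimal sketch with hypothetical ratings:

```python
def icc_2_1(ratings):
    """ICC(2,1) per Shrout & Fleiss: n targets (rows) rated by k judges
    (columns), two-way random-effects model, absolute agreement, single rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # between-targets mean square
    msc = ss_cols / (k - 1)               # between-judges mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two judges in perfect agreement on three targets -> ICC of exactly 1.0
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

ICC(2,1) penalizes systematic differences between judges (the MSC term in the denominator), which is why it measures absolute agreement rather than mere consistency of rank ordering.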

21,185 citations


"Technology-enhanced simulation for ..." refers methods in this paper

  • ...As thresholds for high or low quality scores, we used an NOS score of 4 (as described previously(15)) and the median of the MERSQI scores (12)....

    [...]
