Journal Article (DOI)

Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review

01 Jan 2005 - Medical Teacher (Taylor & Francis) - Vol. 27, Iss. 1, pp. 10-28
TL;DR: While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.
Abstract: SUMMARY

Review date: 1969 to 2003, 34 years.

Background and context: Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent and varies widely in methodological rigor and substantive focus.

Objectives: Review and synthesize existing evidence in educational science that addresses the question, ‘What are the features and uses of high-fidelity medical simulations that lead to most effective learning?’.

Search strategy: The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the ‘grey literature’ were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality.

Inclusion and exclusion criteria: Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention.

Data extraction: Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol.

Data synthesis: Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframe precluded data synthesis using meta-analysis.

Headline results: Coding accuracy for features of the journal articles is high. The extant quality of the published research is generally weak. The weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions. These include the following:


Citations
Journal Article (DOI)
TL;DR: This article reviews and critically evaluates historical and contemporary research on simulation‐based medical education (SBME) and presents and discusses 12 features and best practices that teachers should know in order to use medical simulation technology to maximum educational benefit.
Abstract: Objectives: This article reviews and critically evaluates historical and contemporary research on simulation-based medical education (SBME). It also presents and discusses 12 features and best practices of SBME that teachers should know in order to use medical simulation technology to maximum educational benefit.

Methods: This qualitative synthesis of SBME research and scholarship was carried out in two stages. Firstly, we summarised the results of three SBME research reviews covering the years 1969–2003. Secondly, we performed a selective, critical review of SBME research and scholarship published during 2003–2009.

Results: The historical and contemporary research synthesis is reported to inform the medical education community about 12 features and best practices of SBME: (i) feedback; (ii) deliberate practice; (iii) curriculum integration; (iv) outcome measurement; (v) simulation fidelity; (vi) skill acquisition and maintenance; (vii) mastery learning; (viii) transfer to practice; (ix) team training; (x) high-stakes testing; (xi) instructor training, and (xii) educational and professional context. Each of these is discussed in the light of available evidence. The scientific quality of contemporary SBME research is much improved compared with the historical record.

Conclusions: Development of and research into SBME have grown and matured over the past 40 years on substantive and methodological grounds. We believe the impact and educational utility of SBME are likely to increase in the future. More thematic programmes of research are needed. Simulation-based medical education is a complex service intervention that needs to be planned and practised with attention to organisational contexts.

Medical Education 2010: 44: 50–63

1,459 citations

Journal Article (DOI)
07 Sep 2011 - JAMA
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context: Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education.

Objective: To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention.

Data Sources: Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011.

Study Selection: Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals.

Data Extraction: Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors.

Data Synthesis: From a pool of 10,903 articles, we identified 609 eligible studies enrolling 35,226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality.

Conclusions: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
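
The pooled effect sizes, confidence intervals, and I² values quoted above come from a standard random-effects meta-analysis. As a rough, hypothetical sketch of that kind of calculation (not the authors' actual analysis, and with invented study data), the snippet below pools a handful of effect sizes with the DerSimonian-Laird estimator and reports I²:

```python
import numpy as np

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.

    effects:   per-study standardized effect sizes (e.g. Hedges' g)
    variances: per-study sampling variances of those effect sizes
    Returns the pooled effect, its 95% CI, and the I^2 heterogeneity statistic.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    k = len(effects)

    # Fixed-effect (inverse-variance) quantities needed for the DL estimator
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - fixed) ** 2)          # Cochran's Q
    df = k - 1

    # Between-study variance tau^2 (DerSimonian-Laird moment estimator)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)

    # Random-effects weights and pooled estimate
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)

    # I^2: share of total variability attributable to between-study heterogeneity
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return pooled, ci, i2


# Toy example with made-up effect sizes from five hypothetical studies
pooled, ci, i2 = pool_random_effects(
    effects=[1.3, 0.9, 1.5, 0.7, 1.2],
    variances=[0.04, 0.06, 0.05, 0.08, 0.03],
)
print(f"pooled g = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I^2 = {i2:.0f}%")
```

With real data, each effect size and its variance would be computed from the individual study results before pooling.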

1,420 citations


Cites background or methods from "Features and uses of high-fidelity ..."

  • ...Criterion B was fulfilled if (1) a randomized study concealed allocation or (2) an observational study controlled for another baseline learner characteristic....

  • ...We sought to answer 2 questions: (1) To what extent are simulation technologies for training health care professionals associated with improved outcomes in comparison with no intervention? and (2) How do outcomes vary for different simulation instructional designs? Based on the strength of the theoretical foundations and currency in the field, we selected 5 instructional design features(2,9) (curricular integration, distributed practice, feedback, mastery learning, and range of difficulty) for subgroup analyses (see eBox for definitions; available at http://www ....

  • ...When authors reported multiple measures of a single outcome (eg, multiple measures of efficiency), we selected in decreasing order of priority (1) the author-defined primary outcome; (2) a global or summary measure of effect; (3) the most clinically relevant measure; or (4) the mean of the measures reported....

  • ...Comparability of cohorts criterion A was fulfilled if the study (1) was randomized or (2) controlled for a baseline learning outcome....

Journal Article (DOI)
TL;DR: This paper critically reviews the role of debriefing in simulation-based learning: why it is considered important, how it has come about and developed over time, the different styles or approaches in use, and how effective the process is.
Abstract: The aim of this paper is to critically review what is felt to be important about the role of debriefing in the field of simulation-based learning, how it has come about and developed over time, and the different styles or approaches that are used and how effective the process is. A recent systematic

1,351 citations

Journal Article (DOI)
TL;DR: Although the number of reports analyzed in this meta-analysis is small, these results show that SBME with DP is superior to traditional clinical medical education in achieving specific clinical skill acquisition goals.
Abstract: Purpose: This article presents a comparison of the effectiveness of traditional clinical education toward skill acquisition goals versus simulation-based medical education (SBME) with deliberate practice (DP).

Method: This is a quantitative meta-analysis that spans 20 years, 1990 to 2010. A search strategy involving three literature databases, 12 search terms, and four inclusion criteria was used. Four authors independently retrieved and reviewed articles. Main outcome measures were extracted to calculate effect sizes.

1,311 citations

Journal Article (DOI)
10 Sep 2008 - JAMA
TL;DR: Internet-based learning is associated with large positive effects compared with no intervention, whereas effects compared with non-Internet instructional methods are generally small, suggesting effectiveness similar to traditional methods.
Abstract: Context: The increasing use of Internet-based learning in health professions education may be informed by a timely, comprehensive synthesis of evidence of effectiveness.

Objectives: To summarize the effect of Internet-based instruction for health professions learners compared with no intervention and with non-Internet interventions.

Data Sources: Systematic search of MEDLINE, Scopus, CINAHL, EMBASE, ERIC, TimeLit, Web of Science, Dissertation Abstracts, and the University of Toronto Research and Development Resource Base from 1990 through 2007.

Study Selection: Studies in any language quantifying the association of Internet-based instruction and educational outcomes for practicing and student physicians, nurses, pharmacists, dentists, and other health care professionals compared with a no-intervention or non-Internet control group or a preintervention assessment.

Data Extraction: Two reviewers independently evaluated study quality and abstracted information including characteristics of learners, learning setting, and intervention (including level of interactivity, practice exercises, online discussion, and duration).

Data Synthesis: There were 201 eligible studies. Heterogeneity in results across studies was large (I² ≥ 79%) in all analyses. Effect sizes were pooled using a random effects model. The pooled effect size in comparison to no intervention favored Internet-based interventions and was 1.00 (95% confidence interval [CI], 0.90-1.10).

Conclusions: Internet-based learning is associated with large positive effects compared with no intervention. In contrast, effects compared with non-Internet instructional methods are heterogeneous and generally small, suggesting effectiveness similar to traditional methods. Future research should directly compare different Internet-based interventions.
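
The per-study effect sizes that feed a pooled estimate like the 1.00 reported above are standardized mean differences. As an illustrative sketch only (all scores below are invented), this computes Hedges' g and its approximate sampling variance for one hypothetical comparison of an Internet-based course against no intervention:

```python
import numpy as np

def hedges_g(treatment, control):
    """Standardized mean difference (Hedges' g) between two groups,
    plus its approximate sampling variance (the input to the pooling step)."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    n_t, n_c = len(t), len(c)

    # Pooled standard deviation across the two groups
    sp = np.sqrt(((n_t - 1) * t.var(ddof=1) + (n_c - 1) * c.var(ddof=1))
                 / (n_t + n_c - 2))
    d = (t.mean() - c.mean()) / sp

    # Small-sample correction turns Cohen's d into Hedges' g
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    g = j * d

    # Approximate large-sample variance of g
    var_g = j ** 2 * ((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return g, var_g


# Hypothetical post-test scores for an internet-based course vs. no intervention
g, var_g = hedges_g(treatment=[78, 85, 90, 72, 88, 81],
                    control=[65, 70, 74, 60, 68, 72])
print(f"Hedges' g = {g:.2f} (variance {var_g:.3f})")
```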

1,241 citations

References
Journal Article (DOI)
TL;DR: In this paper, a study was conducted to determine if laparoscopic skills training using simulated tasks on a video-trainer improves the operative performance of junior surgery residents, and the results showed that intensive training improves video-eye-hand skills and translates into improved operative performance.
Abstract: Background: Developing technical skill is essential to surgical training, but using the operating room for basic skill acquisition may be inefficient and expensive, especially for laparoscopic operations. This study determines if laparoscopic skills training using simulated tasks on a video-trainer improves the operative performance of surgery residents.

Study Design: Second- and third-year residents (n = 27) were prospectively randomized to receive formal laparoscopic skills training or to a control group. At baseline, residents had a validated global assessment of their ability to perform a laparoscopic cholecystectomy based on direct observation by three evaluators who were blinded to the residents' randomization status. Residents were also tested on five standardized video-trainer tasks. The training group practiced the video-trainer tasks as a group for 30 minutes daily for 10 days. The control group received no formal training. All residents repeated the video-trainer test and underwent a second global assessment by the same three blinded evaluators at the end of the 1-month rotation. Within-person improvement was determined; improvement was adjusted for differences in baseline performance.

Results: Five residents were unable to participate because of scheduling problems; 9 residents in the training group and 13 residents in the control group completed the study. Baseline laparoscopic experience, video-trainer scores, and global assessments were not significantly different between the two groups. The training group on average practiced the video-trainer tasks 138 times (range 94 to 171 times); the control group did not practice any task. The trained group achieved significantly greater adjusted improvement in video-trainer scores (five of five tasks) and global assessments (four of eight criteria) over the course of the four-week curriculum, compared with controls.

Conclusions: Intense training improves video-eye-hand skills and translates into improved operative performance for junior surgery residents. Surgical curricula should contain laparoscopic skills training.
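
The abstract notes that within-person improvement was "adjusted for differences in baseline performance". One common way to do that kind of adjustment (an assumption here, not necessarily the authors' exact method) is an ANCOVA-style regression of the post-test score on a group indicator with the baseline score as a covariate, sketched below on invented numbers:

```python
import numpy as np

def baseline_adjusted_effect(baseline, post, trained):
    """ANCOVA-style estimate of the training effect on post-test scores,
    adjusting for baseline performance.

    baseline, post: per-resident scores before and after the rotation
    trained:        1 for the video-trainer group, 0 for controls
    Returns the baseline-adjusted group difference (training effect)."""
    X = np.column_stack([
        np.ones(len(post)),            # intercept
        np.asarray(baseline, float),   # covariate: baseline score
        np.asarray(trained, float),    # group indicator
    ])
    y = np.asarray(post, float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[2]   # coefficient on the group indicator


# Invented scores for six trained and six control residents
effect = baseline_adjusted_effect(
    baseline=[42, 50, 38, 55, 47, 44, 43, 51, 40, 54, 46, 45],
    post=    [61, 68, 58, 72, 66, 63, 48, 57, 45, 60, 52, 50],
    trained= [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
)
print(f"baseline-adjusted training effect: {effect:.1f} points")
```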

651 citations

Journal Article (DOI)
TL;DR: In this paper, the authors developed a series of structured tasks to objectively measure laparoscopic skills and used a linear regression model to test for the effects of level of training and practice on performance.
Abstract: Background: Interest in the training and evaluation of laparoscopic skills is extending beyond the realm of the operating room to the use of laparoscopic simulators. The purpose of this study was to develop a series of structured tasks to objectively measure laparoscopic skills. This model was then used to test for the effects of level of training and practice on performance.

Methods: Forty-two subjects (6 each of surgical residents PGY1 to PGY5, 6 surgeons who practice laparoscopy and 6 who do not) were evaluated. Each subject viewed a 20-minute introductory video, then was tested performing 7 laparoscopic tasks (peg transfers, pattern cutting, clip and divide, endolooping, mesh placement and fixation, suturing with intracorporeal or extracorporeal knots). Performance was measured using a scoring system rewarding precision and speed. Each candidate repeated all 7 tasks and was rescored. Data were analyzed by linear regression to assess the relationship of performance with level of residency training for each task, and by ANOVA with repeated measures to test for effects of level of training, of repetition, and of the interaction between level of training and repetition on overall performance. Student's t test was used to evaluate differences between laparoscopic and nonlaparoscopic surgeons and between each of these groups and the PGY 5 level of surgical residents.

Results: Significant predictors of overall performance were (a) level of training (P = 0.002) and (b) repetition (P = 0.001). There was also a significant interaction between level of training and the specific task on performance scores (P = 0.006). When each task was evaluated individually for the 30 residents, 4 of the 7 tasks (tasks 1, 2, 6, 7) showed significant correlation between PGY level and score. A significant difference in performance scores between laparoscopic and nonlaparoscopic surgeons was seen for tasks 1, 2, and 6.

Conclusions: A model was developed to evaluate laparoscopic skills. Construct validity was demonstrated by measuring significant improvement in performance with increasing residency training, and with practice. Further validation will require correlation of performance in the model with skill in vivo.
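
The construct-validity claim rests on a linear regression of performance scores against year of residency training: scores should rise with PGY level. The toy sketch below (all scores invented) shows the form of that check:

```python
import numpy as np

# Invented overall task scores for 30 residents (6 per PGY level),
# illustrating the construct-validity check: does score rise with training level?
pgy_level = np.repeat([1, 2, 3, 4, 5], 6)
score = np.array([31, 28, 35, 30, 33, 29,     # PGY-1
                  38, 41, 36, 40, 37, 39,     # PGY-2
                  45, 43, 48, 44, 46, 42,     # PGY-3
                  52, 55, 50, 53, 51, 54,     # PGY-4
                  60, 58, 63, 59, 61, 57])    # PGY-5

# Simple least-squares fit of score on PGY level
slope, intercept = np.polyfit(pgy_level, score, deg=1)

# Pearson correlation as a quick summary of the association
r = np.corrcoef(pgy_level, score)[0, 1]
print(f"score ≈ {intercept:.1f} + {slope:.1f} * PGY level (r = {r:.2f})")
```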

606 citations

Book
01 Jul 1994
TL;DR: Human Error in Medicine: A Frontier for Change is concerned with the causes and reduction of human errors in clinical practice and the consequences of these errors on patients and the profession.
Abstract: Contents: J.T. Reason, Foreword. M.S. Bogner, Introduction. L.L. Leape, The Preventability of Medical Injury. J.A. Perper, Life-Threatening and Fatal Therapeutic Misadventures. H. Van Cott, Human Errors: Their Causes and Reduction. N. Moray, Error Reduction as a Systems Problem. G. Vroman, I. Cohen, N. Volkman, Misinterpreting Cognitive Decline in the Elderly: Blaming the Patient. R.L. Klatzky, J. Geiwitz, S.C. Fischer, Using Statistics in Clinical Practice: A Gap Between Training and Application. T.B. Sheridan, J.M. Thompson, People Versus Computers in Medicine. J.W. Senders, Medical Devices, Medical Errors, and Medical Accidents. D.I. Serig, Radiopharmaceutical Misadministrations: What's Wrong? D.M. Gaba, Human Error in Dynamic Medical Domains. R.L. Helmreich, H-G. Schaefer, Team Performance in the Operating Room. R.I. Cook, D.D. Woods, Operating at the Sharp End: The Complexity of Human Error. G.P. Krueger, Fatigue, Performance, and Medical Error. W.A. Hyman, Errors in the Use of Medical Equipment. M.H. Applegate, Diagnosis-Related Groups: Are Patients in Jeopardy? M.S. Bogner, Human Error in Medicine: A Frontier for Change. J. Rasmussen, Afterword.

591 citations


"Features and uses of high-fidelity ..." refers background in this paper

  • ...Most medical errors result from problems in the systems of care rather than from individual mistakes (Bogner, 1994)....

Journal Article (DOI)
03 Sep 1997 - JAMA
TL;DR: The findings suggest a need to improve the teaching and assessment of cardiac auscultation during generalists' training, particularly with the advent of managed care and its search for more cost-effective uses of technology.
Abstract: Context: Medical educators have had a growing sense that proficiency in physical diagnostic skills is waning, but few data have examined the question critically.

Objective, Design, and Setting: To compare the cardiac auscultatory proficiency of medical students and physicians in training. A multicenter cross-sectional assessment of students and house staff. A total of 8 internal medicine and 23 family practice programs of the mid-Atlantic area.

Participants: A total of 453 physicians in training and 88 medical students.

Interventions: All participants listened to 12 cardiac events directly recorded from patients, which they identified by completing a multiple-choice questionnaire.

Main Outcome Measures: Scores were expressed as the percentage of participants, for year and type of training, who correctly identified each event. Cumulative scores were expressed as the total number of events correctly recognized. An adjusted score was calculated whenever participants selected not only the correct finding but also findings that are acoustically similar and yet absent.

Results: Trainees' cumulative scores ranged between 0 and 7 for both internal medicine and family practice residents (median, 2.5 and 2.0, respectively). Internal medicine residents had the highest cumulative adjusted scores for the 6 extra sounds and for all 12 cardiac events tested (P = .01 and .02, respectively). On average, internal medicine and family practice residents recognized 20% of all cardiac events; the number of correct identifications improved little with year of training and was not significantly higher than the number identified by medical students.

Conclusions: Both internal medicine and family practice trainees had a disturbingly low identification rate for 12 important and commonly encountered cardiac events. This study suggests a need to improve the teaching and assessment of cardiac auscultation during generalists' training, particularly with the advent of managed care and its search for more cost-effective uses of technology.

531 citations


"Features and uses of high-fidelity ..." refers background in this paper

  • ...That study also stressed the need for structured, supplemental strategies to improve clinical education, including the use of simulation systems for training (Mangione & Nieman, 1997)....

Journal Article (DOI)
27 Dec 1995 - JAMA
TL;DR: At the bedside or in the office, physicians should have instantaneous, up-to-date assistance from an affordable, universally available database of systematic reviews of the best evidence from clinical trials.
Abstract: Where should a physician look to find accurate, up-to-date information about the effectiveness of a variety of clinical interventions? At the bedside or in the office, physicians should have instantaneous, up-to-date assistance from an affordable, universally available database of systematic reviews of the best evidence from clinical trials. Unfortunately, the physician who tries to seek the best evidence is often thwarted. Textbooks and reviews are often unreliable and years out of date. The searcher may find the MEDLINE database, surely one of the greatest achievements of US medicine, daunting and incomplete. Although well over 1 million clinical trials have been conducted, hundreds of thousands remain unpublished or are hard to find and may be in various languages. In the unlikely event that the physician finds all the relevant trials of a treatment, these are rarely accompanied by any comprehensive systematic review attempting to

434 citations