Journal ArticleDOI

Systematic review of randomized controlled trials on the effectiveness of virtual reality training for laparoscopic surgery.

01 Sep 2008-British Journal of Surgery (John Wiley & Sons, Ltd)-Vol. 95, Iss: 9, pp 1088-1097
TL;DR: The aim of this review was to determine whether virtual reality (VR) training can supplement and/or replace conventional laparoscopic training in surgical trainees with limited or no laparoscopic experience.
Abstract: Background: Surgical training has traditionally been one of apprenticeship. The aim of this review was to determine whether virtual reality (VR) training can supplement and/or replace conventional laparoscopic training in surgical trainees with limited or no laparoscopic experience. Methods: Randomized clinical trials addressing this issue were identified from The Cochrane Library trials register, Medline, Embase, Science Citation Index Expanded, grey literature and reference lists. Standardized mean difference was calculated with 95 per cent confidence intervals based on available case analysis. Results: Twenty-three trials (mostly with a high risk of bias) involving 622 participants were included in this review. In trainees without surgical experience, VR training decreased the time taken to complete a task, increased accuracy and decreased errors compared with no training. In the same participants, VR training was more accurate than video trainer (VT) training. In participants with limited laparoscopic experience, VR training resulted in a greater reduction in operating time, error and unnecessary movements than standard laparoscopic training. In these participants, the composite performance score was better in the VR group than the VT group. Conclusion: VR training can supplement standard laparoscopic surgical training. It is at least as effective as video training in supplementing standard laparoscopic training.
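The review's pooled statistic, a standardized mean difference with a 95 per cent confidence interval, can be sketched in Python. This is an illustrative implementation only (Cohen's d with a pooled SD and a normal-approximation CI); the review's exact estimator and any small-sample correction may differ:

```python
import math

def smd_with_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """Standardized mean difference (Cohen's d) with a 95% CI.

    Illustrative sketch: pooled-SD standardization and a
    normal-approximation standard error for the SMD.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Approximate standard error of the SMD
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

For example, a trained group completing a task in 10 minutes (SD 2, n=20) versus an untrained group at 12 minutes (SD 2, n=20) gives an SMD of -1.0 favouring training.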
Citations
Journal ArticleDOI
07 Sep 2011-JAMA
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. Objective To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. Data Source Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Study Selection Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. Data Extraction Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. Data Synthesis From a pool of 10 903 articles, we identified 609 eligible studies enrolling 35 226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I2 > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32).
Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. Conclusion In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
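The abstract above says effect sizes were pooled "using random effects". One standard choice for this (though the authors' exact software and estimator are not stated here) is the DerSimonian-Laird method, which estimates the between-study variance tau² from Cochran's Q and re-weights each study accordingly. A minimal sketch:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird estimator.

    Illustrative sketch only; returns the pooled effect, its standard
    error, and the estimated between-study variance tau^2.
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments tau^2 (truncated at 0)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2
    wr = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    se = (1.0 / sum(wr)) ** 0.5
    return pooled, se, tau2
```

With homogeneous studies (Q below its degrees of freedom) tau² truncates to zero and the result coincides with the fixed-effect estimate; heterogeneous inputs widen the standard error.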

1,420 citations

Journal ArticleDOI
TL;DR: Implementation of a comprehensive surgical safety checklist was associated with a reduction in surgical complications and mortality in hospitals with a high standard of care.
Abstract: In a comparison of 3760 patients observed before implementation of the checklist with 3820 patients observed after implementation, the total number of complications per 100 patients decreased from 27.3 (95% confidence interval [CI], 25.9 to 28.7) to 16.7 (95% CI, 15.6 to 17.9), for an absolute risk reduction of 10.6 (95% CI, 8.7 to 12.4). The proportion of patients with one or more complications decreased from 15.4% to 10.6% (P<0.001). In-hospital mortality decreased from 1.5% (95% CI, 1.2 to 2.0) to 0.8% (95% CI, 0.6 to 1.1), for an absolute risk reduction of 0.7 percentage points (95% CI, 0.2 to 1.2). Outcomes did not change in the control hospitals. Conclusions Implementation of this comprehensive checklist was associated with a reduction in surgical complications and mortality in hospitals with a high standard of care. (Netherlands Trial Register number, NTR1943.)

905 citations

Journal ArticleDOI
TL;DR: A systematic review of studies comparing different simulation-based interventions confirmed quantitatively the effectiveness of several instructional design features in simulation- based education.
Abstract: Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I2 > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and -0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.

518 citations


Cites background from "Systematic review of randomized con..."

  • ...several other reviews have addressed simulation in general (Issenberg et al. 2005; McGaghie et al. 2010) or in comparison with no intervention (Gurusamy et al. 2008; McGaghie et al. 2011), we are not aware of previous reviews focused on comparisons of different technology-enhanced simulation interventions or instructional designs...

Journal ArticleDOI
TL;DR: Comparisons of different virtual patient designs suggest that repetition until demonstration of mastery, advance organizers, enhanced feedback, and explicitly contrasting cases can improve learning outcomes.
Abstract: Purpose: Educators increasingly use virtual patients (computerized clinical case simulations) in health professions training. The authors summarize the effect of virtual patients compared with no intervention and alternate instructional methods, and elucidate features of effective virtual patients.

421 citations

Journal ArticleDOI
TL;DR: Operative performance in the virtual reality group was significantly better than in the control group under the fixed-effect model, although the result became non-significant when the random-effects model was used; two trials that could not be included in the meta-analysis also showed a reduction in operating time.
Abstract: Background Standard surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time-consuming, costly, and of variable effectiveness. Training using a virtual reality simulator is an option to supplement standard training. Virtual reality training improves the technical skills of surgical trainees such as decreased time for suturing and improved accuracy. The clinical impact of virtual reality training is not known. Objectives To assess the benefits (increased surgical proficiency and improved patient outcomes) and harms (potentially worse patient outcomes) of supplementary virtual reality training of surgical trainees with limited laparoscopic experience. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE and Science Citation Index Expanded until July 2012. Selection criteria We included all randomised clinical trials comparing virtual reality training versus other forms of training including box-trainer training, no training, or standard laparoscopic training in surgical trainees with little laparoscopic experience. We also planned to include trials comparing different methods of virtual reality training. We included only trials that assessed the outcomes in people undergoing laparoscopic surgery. Data collection and analysis Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager 5 analysis. For each outcome we calculated the mean difference (MD) or standardised mean difference (SMD) with 95% confidence intervals based on intention-to-treat analysis. Main results We included eight trials covering 109 surgical trainees with limited laparoscopic experience. Of the eight trials, six compared virtual reality versus no supplementary training. 
One trial compared virtual reality training versus box-trainer training and versus no supplementary training, and one trial compared virtual reality training versus box-trainer training. There were no trials that compared different forms of virtual reality training. All the trials were at high risk of bias. Operating time and operative performance were the only outcomes reported in the trials. The remaining outcomes such as mortality, morbidity, quality of life (the primary outcomes of this review) and hospital stay (a secondary outcome) were not reported. Virtual reality training versus no supplementary training: The operating time was significantly shorter in the virtual reality group than in the no supplementary training group (3 trials; 49 participants; MD -11.76 minutes; 95% CI -15.23 to -8.30). Two trials that could not be included in the meta-analysis also showed a reduction in operating time (statistically significant in one trial). The numerical values for operating time were not reported in these two trials. The operative performance was significantly better in the virtual reality group than in the no supplementary training group using the fixed-effect model (2 trials; 33 participants; SMD 1.65; 95% CI 0.72 to 2.58). The results became non-significant when the random-effects model was used (2 trials; 33 participants; SMD 2.14; 95% CI -1.29 to 5.57). One trial could not be included in the meta-analysis as it did not report the numerical values. The authors stated that the operative performance of the virtual reality group was significantly better than that of the control group. Virtual reality training versus box-trainer training: The only trial that reported operating time did not report the numerical values. In this trial, the operating time in the virtual reality group was significantly shorter than in the box-trainer group. Of the two trials that reported operative performance, only one trial reported the numerical values.
The operative performance was significantly better in the virtual reality group than in the box-trainer group (1 trial; 19 participants; SMD 1.46; 95% CI 0.42 to 2.50). In the other trial that did not report the numerical values, the authors stated that the operative performance in the virtual reality group was significantly better than the box-trainer group. Authors' conclusions Virtual reality training appears to decrease the operating time and improve the operative performance of surgical trainees with limited laparoscopic experience when compared with no training or with box-trainer training. However, the impact of this decreased operating time and improvement in operative performance on patients and healthcare funders in terms of improved outcomes or decreased costs is not known. Further well-designed trials at low risk of bias and random errors are necessary. Such trials should assess the impact of virtual reality training on clinical outcomes.

421 citations

References
Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared with large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials the results from such analyses should be treated with considerable caution.
Key messages:
Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials.
Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases.
Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews.
Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
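The asymmetry measure described above, the intercept from a regression of standard normal deviates (effect/SE) against precision (1/SE), can be sketched as a plain least-squares fit. This is an illustrative version of the regression, not the authors' original analysis code:

```python
def egger_intercept(effects, ses):
    """Funnel-plot asymmetry regression (Egger-style).

    Regresses the standard normal deviate (effect / SE) on precision
    (1 / SE); an intercept far from zero suggests asymmetry.
    Illustrative sketch: ordinary least squares, no significance test.
    """
    y = [e / s for e, s in zip(effects, ses)]   # standard normal deviates
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

In an unbiased meta-analysis the deviates are proportional to precision, so the intercept sits near zero and the slope estimates the underlying effect; small-study effects push the intercept away from zero.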

37,989 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews each reporting results from several related trials in order to evaluate the efficacy of a certain treatment for a specified medical condition and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.

33,234 citations

Journal ArticleDOI
TL;DR: It is concluded that H and I2, which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity, and one or both should be presented in publishedMeta-an analyses in preference to the test for heterogeneity.
Abstract: The extent of heterogeneity in a meta-analysis partly determines the difficulty in drawing overall conclusions. This extent may be measured by estimating a between-study variance, but interpretation is then specific to a particular treatment effect metric. A test for the existence of heterogeneity exists, but depends on the number of studies in the meta-analysis. We develop measures of the impact of heterogeneity on a meta-analysis, from mathematical criteria, that are independent of the number of studies and the treatment effect metric. We derive and propose three suitable statistics: H is the square root of the chi2 heterogeneity statistic divided by its degrees of freedom; R is the ratio of the standard error of the underlying mean from a random effects meta-analysis to the standard error of a fixed effect meta-analytic estimate; and I2 is a transformation of H that describes the proportion of total variation in study estimates that is due to heterogeneity. We discuss interpretation, interval estimates and other properties of these measures and examine them in five example data sets showing different amounts of heterogeneity. We conclude that H and I2, which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity. One or both should be presented in published meta-analyses in preference to the test for heterogeneity.
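Both summaries follow directly from Cochran's Q statistic and the number of studies, which is why they can usually be recomputed from published meta-analyses. A minimal sketch, using the common convention of truncating H at 1 and I2 at 0 when Q falls below its degrees of freedom:

```python
import math

def heterogeneity_measures(q, k):
    """H and I^2 from Cochran's Q for k studies (df = k - 1).

    Illustrative sketch: H = sqrt(Q / df), truncated at 1;
    I^2 = 100 * (Q - df) / Q, truncated at 0.
    """
    df = k - 1
    h = math.sqrt(q / df) if q > df else 1.0
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return h, i2
```

For instance, Q = 20 across 11 studies (df = 10) gives H = sqrt(2) and I2 = 50%, i.e. half the observed variation in study estimates is attributable to heterogeneity rather than chance.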

25,460 citations

Journal ArticleDOI
01 Feb 1995-JAMA
TL;DR: Empirical evidence is provided that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias.
Abstract: Objective. —To determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects. Design. —An observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects. Data Sources. —Meta-analyses from the Cochrane Pregnancy and Childbirth Database. Main Outcome Measures. —The associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding. Results. —Compared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P = .01), with odds ratios being exaggerated by 17%. Conclusions. —This study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials. (JAMA. 1995;273:408-412)

5,765 citations

Journal ArticleDOI
TL;DR: Studies of low methodological quality in which the estimate of quality is incorporated into the meta-analyses can alter the interpretation of the benefit of intervention, whether a scale or component approach is used in the assessment of trial quality.

3,129 citations
