Journal ArticleDOI

Internet-based learning in the health professions: a meta-analysis.

10 Sep 2008-JAMA (American Medical Association)-Vol. 300, Iss: 10, pp 1181-1196
TL;DR: Internet-based learning is associated with large positive effects compared with no intervention; compared with non-Internet instructional methods, effects are heterogeneous and generally small, suggesting effectiveness similar to traditional methods.
Abstract: Context The increasing use of Internet-based learning in health professions education may be informed by a timely, comprehensive synthesis of evidence of effectiveness. Objectives To summarize the effect of Internet-based instruction for health professions learners compared with no intervention and with non-Internet interventions. Data Sources Systematic search of MEDLINE, Scopus, CINAHL, EMBASE, ERIC, TimeLit, Web of Science, Dissertation Abstracts, and the University of Toronto Research and Development Resource Base from 1990 through 2007. Study Selection Studies in any language quantifying the association of Internet-based instruction and educational outcomes for practicing and student physicians, nurses, pharmacists, dentists, and other health care professionals compared with a no-intervention or non-Internet control group or a preintervention assessment. Data Extraction Two reviewers independently evaluated study quality and abstracted information including characteristics of learners, learning setting, and intervention (including level of interactivity, practice exercises, online discussion, and duration). Data Synthesis There were 201 eligible studies. Heterogeneity in results across studies was large (I² ≥ 79%) in all analyses. Effect sizes were pooled using a random effects model. The pooled effect size in comparison to no intervention favored Internet-based interventions and was 1.00 (95% confidence interval [CI], 0.90-1.10). Conclusions Internet-based learning is associated with large positive effects compared with no intervention. In contrast, effects compared with non-Internet instructional methods are heterogeneous and generally small, suggesting effectiveness similar to traditional methods. Future research should directly compare different Internet-based interventions.
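The pooling step the abstract describes (a random-effects model applied to heterogeneous studies) can be sketched numerically. The snippet below is a minimal illustration of DerSimonian-Laird random-effects pooling with invented effect sizes and variances; it is not the authors' code or data.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.

    effects: per-study standardized effect sizes.
    variances: their sampling variances.
    Returns the pooled effect, its 95% CI, and the I^2 statistic (%).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted dispersion around the fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # DerSimonian-Laird estimate of between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights incorporate tau^2
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

effects = [1.2, 0.8, 1.1, 0.6]       # illustrative values, not the review's data
variances = [0.04, 0.05, 0.03, 0.06]
pooled, ci, i2 = pool_random_effects(effects, variances)
print(round(pooled, 2), [round(x, 2) for x in ci], round(i2, 1))
```

With heterogeneity this large, the random-effects weights flatten toward equality, which is why the review's pooled estimates come with wide confidence intervals.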
Citations
Journal ArticleDOI
07 Sep 2011-JAMA
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. Objective To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. Data Source Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsychINFO, Scopus, key journals, and previous review bibliographies through May 2011. Study Selection Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. Data Extraction Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. Data Synthesis From a pool of 10 903 articles, we identified 609 eligible studies enrolling 35 226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I2>50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). 
Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. Conclusion In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.

1,420 citations

Journal ArticleDOI
TL;DR: Social media use in medical education is an emerging field of scholarship that merits further investigation and educators face challenges in adapting new technologies, but they also have opportunities for innovation.
Abstract: Purpose The authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training…

532 citations

Journal ArticleDOI
TL;DR: A systematic review of studies comparing different simulation-based interventions confirmed quantitatively the effectiveness of several instructional design features in simulation- based education.
Abstract: Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I² > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and −0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.

518 citations


Cites background or methods from "Internet-based learning in the health professions: a meta-analysis"

  • ...…Research Study Quality Instrument (Reed et al. 2007) and an adaptation of the Newcastle-Ottawa scale for cohort studies (Wells et al. 2007; Cook et al. 2008b) that evaluates representativeness of the intervention group (ICC, 0.68), selection of the comparison group (ICC, 0.26 with raw…...

  • ...Those responsible for funding decisions must recognize the importance of theory-building research that clarifies (Cook et al. 2008a) the modalities and features of simulation-based education that improve learner and patient outcomes with greatest effectiveness and at lowest cost....

  • ...Our findings of small to moderate effects favoring theory-predicted instructional design features parallel the findings of a review of Internet-based instruction (Cook et al. 2008b)....

Journal ArticleDOI
TL;DR: Virtual patients (VPs), which take the form of interactive computer‐based clinical scenarios, may help to reconcile the paradox of increased training expectations and reduced training resources.
Abstract: CONTEXT The opposing forces of increased training expectations and reduced training resources have greatly impacted health professions education. Virtual patients (VPs), which take the form of interactive computer-based clinical scenarios, may help to reconcile this paradox. METHODS We summarise research on VPs, highlight the spectrum of potential variation and identify an agenda for future research. We also critically consider the role of VPs in the educational armamentarium. RESULTS We propose that VPs' most unique and cost-effective function is to facilitate and assess the development of clinical reasoning. Clinical reasoning in experts involves a non-analytical process that matures through deliberate practice with multiple and varied clinical cases. Virtual patients are ideally suited to this task. Virtual patients can also be used in learner assessment, but scoring rubrics should emphasise non-analytical clinical reasoning rather than completeness of information or algorithmic approaches. Potential variations in VP design are practically limitless, yet few studies have rigorously explored design issues. More research is needed to inform instructional design and curricular integration. CONCLUSIONS Virtual patients should be designed and used to promote clinical reasoning skills. More research is needed to inform how to effectively use VPs.

506 citations

Journal ArticleDOI
TL;DR: The results indicate that, in terms of achievement outcomes, BL conditions exceed CI conditions by about one-third of a standard deviation, and how this line of research can improve pedagogy and student achievement is explored.
Abstract: This paper serves several purposes. First and foremost, it is devoted to developing a better understanding of the effectiveness of blended learning (BL) in higher education. This is achieved through a meta-analysis of a sub-collection of comparative studies of BL and classroom instruction (CI) from a larger systematic review of technology integration (Schmid et al. in Comput Educ 72:271–291, 2014). In addition, the methodology of meta-analysis is described and illustrated by examples from the current study. The paper begins with a summary of the experimental research on distance education (DE) and online learning (OL), encapsulated in meta-analyses that have been conducted since 1990. Then it introduces the Bernard et al. (Rev Educ Res 74(3):379–439, 2009) meta-analysis, which attempted to alter the DE research culture of always comparing DE/OL with CI by examining three forms of interaction treatments (i.e., student–student, student–teacher, student–content) within DE, using the theoretical framework of Moore (Am J Distance Educ 3(2):1–6, 1989) and Anderson (Rev Res Open Distance Learn 4(2):9–14, 2003). The rest of the paper revolves around the general steps and procedures (Cooper in Research synthesis and meta-analysis: a step-by-step approach, 4th edn, SAGE, Los Angeles, CA, 2010) involved in conducting a meta-analysis. This section is included to provide researchers with an overview of precisely how meta-analyses can be used to respond to more nuanced questions that speak to underlying theory and inform practice—in other words, not just answers to the “big questions.” In this instance, we know that technology has an overall positive impact on learning (g + = +0.35, p < .01, Tamim et al. in Rev Educ Res 81(3):4–28, 2011), but the sub-questions addressed here concern BL interacting with technology in higher education. 
The results indicate that, in terms of achievement outcomes, BL conditions exceed CI conditions by about one-third of a standard deviation (g + = 0.334, k = 117, p < .001) and that the kind of computer support used (i.e., cognitive support vs. content/presentational support) and the presence of one or more interaction treatments (e.g., student–student/–teacher/–content interaction) serve to enhance student achievement. We examine the empirical studies that yielded these outcomes, work through the methodology that enables evidence-based decision-making, and explore how this line of research can improve pedagogy and student achievement.
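The g⁺ values reported above are standardized mean differences in Hedges' g, i.e. Cohen's d with a small-sample bias correction. A minimal sketch with invented exam scores (not the review's data):

```python
import math
import statistics

def hedges_g(group1, group2):
    """Hedges' g: standardized mean difference with the
    small-sample bias correction factor J applied to Cohen's d."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    # pooled standard deviation across both groups
    sp = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    d = (statistics.mean(group1) - statistics.mean(group2)) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # bias-correction factor
    return j * d

blended = [78, 85, 90, 74, 88, 81]    # illustrative scores, BL condition
classroom = [72, 80, 77, 70, 83, 76]  # illustrative scores, CI condition
print(round(hedges_g(blended, classroom), 2))
```

A g⁺ of 0.334 pooled over k = 117 comparisons means the average blended-learning group outperformed its classroom counterpart by about a third of a pooled standard deviation.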

496 citations

References
Book
01 Dec 1969
TL;DR: This book presents the concepts of power analysis, with computational procedures for the t-test for means, chi-square tests for goodness of fit and contingency tables, and the sign test, among others.
Abstract: Contents: Prefaces. The Concepts of Power Analysis. The t-Test for Means. The Significance of a Product Moment rₛ. Differences Between Correlation Coefficients. The Test That a Proportion is .50 and the Sign Test. Differences Between Proportions. Chi-Square Tests for Goodness of Fit and Contingency Tables. The Analysis of Variance and Covariance. Multiple Regression and Correlation Analysis. Set Correlation and Multivariate Methods. Some Issues in Power Analysis. Computational Procedures.
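The sample-size tables in Cohen's book can be approximated with a simple closed-form formula. The sketch below uses the normal approximation for a two-sample t-test; it is an illustration, not Cohen's exact noncentral-t computation, which yields slightly larger values.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided,
    two-sample t-test, via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = z.inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

# Cohen's conventional "medium" effect (d = 0.5) at 80% power:
print(n_per_group(0.5))   # 63 per group under the normal approximation
```

Smaller target effects drive the required n up quadratically, which is why meta-analyses of many small studies (as in the reviews above) are so valuable.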

115,069 citations


"Internet-based learning in the heal..." refers background in this paper

  • ...The pooled estimate of effect size was large across all educational outcomes.(40) Furthermore, we found a moderate or large effect for nearly all subgroup analyses exploring variations in learning setting, instructional design, study design, and study quality....

Journal ArticleDOI
04 Sep 2003-BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the conventional heterogeneity test, which is susceptible to the number of trials included.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
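Higgins and Thompson's I² is a one-line transformation of Cochran's Q statistic and the number of studies, which is how values like "I² ≥ 79%" in the reviews above are obtained. A minimal sketch:

```python
def i_squared(q, k):
    """I^2 from Cochran's Q and the number of studies k:
    the percentage of total variation across studies that is
    due to heterogeneity rather than chance.
    I^2 = max(0, (Q - df) / Q) * 100, with df = k - 1."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Example: Q = 25 across 10 studies
print(round(i_squared(25, 10), 1))   # → 64.0
```

Unlike the P value of the heterogeneity test, I² does not grow mechanically with the number of trials, which is the property the authors highlight.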

45,105 citations

Journal ArticleDOI
TL;DR: In this article, the authors present guidelines for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges, and the confidence intervals for each of the forms are reviewed.
Abstract: Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges. Relevant to the choice of the coefficient are the appropriate statistical model for the reliability and the application to be made of the reliability results. Confidence intervals for each of the forms are reviewed.
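The ICC values quoted in the citing review (e.g. ICC 0.68 for rating representativeness) are interrater reliability coefficients of the kind this article classifies. The sketch below computes only one of Shrout and Fleiss's six forms, the one-way random-effects ICC(1,1), from ANOVA mean squares, with invented ratings:

```python
import statistics

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1), one of Shrout & Fleiss's
    six forms: n targets each rated by k judges, with judges
    treated as part of the error term.
    `ratings` is a list of n rows, each holding k ratings."""
    n, k = len(ratings), len(ratings[0])
    grand = statistics.mean(x for row in ratings for x in row)
    row_means = [statistics.mean(row) for row in ratings]
    # between-targets mean square
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # within-target (error) mean square
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

ratings = [[9, 8], [6, 5], [8, 9], [2, 3], [7, 7]]   # 5 targets, 2 judges
print(round(icc_oneway(ratings), 2))
```

Which of the six forms is appropriate depends on whether judges are crossed with targets and whether single or averaged ratings will be used, which is exactly the choice the article's guidelines address.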

21,185 citations

Journal ArticleDOI
19 Apr 2000-JAMA
TL;DR: A checklist of specifications for reporting meta-analyses of observational studies in epidemiology, covering background, search strategy, methods, results, discussion, and conclusion, should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers.
Abstract: ObjectiveBecause of the pressure for timely, informed decisions in public health and clinical practice and the explosion of information in the scientific literature, research results must be synthesized. Meta-analyses are increasingly used to address this problem, and they often evaluate observational studies. A workshop was held in Atlanta, Ga, in April 1997, to examine the reporting of meta-analyses of observational studies and to make recommendations to aid authors, reviewers, editors, and readers.ParticipantsTwenty-seven participants were selected by a steering committee, based on expertise in clinical practice, trials, statistics, epidemiology, social sciences, and biomedical editing. Deliberations of the workshop were open to other interested scientists. Funding for this activity was provided by the Centers for Disease Control and Prevention.EvidenceWe conducted a systematic review of the published literature on the conduct and reporting of meta-analyses in observational studies using MEDLINE, Educational Research Information Center (ERIC), PsycLIT, and the Current Index to Statistics. We also examined reference lists of the 32 studies retrieved and contacted experts in the field. Participants were assigned to small-group discussions on the subjects of bias, searching and abstracting, heterogeneity, study categorization, and statistical methods.Consensus ProcessFrom the material presented at the workshop, the authors developed a checklist summarizing recommendations for reporting meta-analyses of observational studies. The checklist and supporting evidence were circulated to all conference attendees and additional experts. All suggestions for revisions were addressed.ConclusionsThe proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. 
Use of the checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers. An evaluation plan is suggested and research areas are explored.

17,663 citations


"Internet-based learning in the heal..." refers methods in this paper

  • ...These reviews were planned, conducted, and reported in adherence to standards of quality for reporting meta-analyses (Quality of Reporting of Meta-analyses and Meta-analysis of Observational Studies in Epidemiology standards).(18,19)...
