Author

Ryan Brydges

Bio: Ryan Brydges is an academic researcher at the University of Toronto. His research focuses on medicine and self-regulated learning. He has an h-index of 33 and has co-authored 100 publications receiving 6,547 citations. His previous affiliations include the Mayo Clinic and the University of British Columbia.


Papers
Journal Article
07 Sep 2011 - JAMA
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. Objective To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. Data Source Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Study Selection Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. Data Extraction Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. Data Synthesis From a pool of 10 903 articles, we identified 609 eligible studies enrolling 35 226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. Conclusion In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
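Both this review and the instructional-design review below pool study results using random-effects meta-analysis. A minimal sketch of that pooling, assuming the common DerSimonian-Laird estimator and hypothetical per-study effect sizes and variances (the review's actual data and software are not given here):

```python
# Random-effects meta-analysis with the DerSimonian-Laird estimator.
# Effect sizes and variances below are hypothetical illustrations,
# not data from the review.
import math

def pool_random_effects(effects, variances):
    """Return the pooled effect and its 95% CI under a random-effects model."""
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

effects = [1.40, 0.70, 1.20, 0.85, 1.10]    # hypothetical standardized effects
variances = [0.04, 0.05, 0.06, 0.05, 0.08]  # hypothetical sampling variances
pooled, (lo, hi) = pool_random_effects(effects, variances)
print(f"pooled effect {pooled:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

Random-effects weighting widens the confidence interval relative to a fixed-effect model whenever between-study variance is present, which fits the large heterogeneity (I² > 50%) the review reports.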

1,420 citations

Journal Article
TL;DR: A systematic review of studies comparing different simulation-based interventions confirmed quantitatively the effectiveness of several instructional design features in simulation-based education.
Abstract: Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I² > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and −0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.
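The inconsistency statistic I² quoted in both reviews summarizes how much of the observed variability in effect sizes reflects between-study heterogeneity rather than chance. A minimal sketch of its computation from Cochran's Q, again with hypothetical inputs:

```python
# I^2 from Cochran's Q: the share of total variability attributable to
# between-study heterogeneity. Inputs are hypothetical illustrations.
def i_squared(effects, variances):
    w = [1.0 / v for v in variances]                    # inverse-variance weights
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

print(f"I^2 = {i_squared([0.68, 0.30, 0.95, 0.12], [0.02, 0.05, 0.03, 0.04]):.0f}%")
```

Values above 50% are conventionally read as large inconsistency, which is the situation both reviews report.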

518 citations

Journal Article
TL;DR: The authors recommend abandoning the term fidelity in simulation-based health professions education, replacing it with terms that reflect the underlying primary concepts of physical resemblance and functional task alignment, and shifting the current emphasis on physical resemblance toward functional correspondence between the simulator and the applied context.
Abstract: In simulation-based health professions education, the concept of simulator fidelity is usually understood as the degree to which a simulator looks, feels, and acts like a human patient. Although this can be a useful guide in designing simulators, this definition emphasizes technological advances and physical resemblance over principles of educational effectiveness. In fact, several empirical studies have shown that the degree of fidelity appears to be independent of educational effectiveness. The authors confronted these issues while conducting a recent systematic review of simulation-based health professions education, and in this Perspective they use their experience in conducting that review to examine key concepts and assumptions surrounding the topic of fidelity in simulation.Several concepts typically associated with fidelity are more useful in explaining educational effectiveness, such as transfer of learning, learner engagement, and suspension of disbelief. Given that these concepts more directly influence properties of the learning experience, the authors make the following recommendations: (1) abandon the term fidelity in simulation-based health professions education and replace it with terms reflecting the underlying primary concepts of physical resemblance and functional task alignment; (2) make a shift away from the current emphasis on physical resemblance to a focus on functional correspondence between the simulator and the applied context; and (3) focus on methods to enhance educational effectiveness using principles of transfer of learning, learner engagement, and suspension of disbelief. These recommendations clarify underlying concepts for researchers in simulation-based health professions education and will help advance this burgeoning field.

408 citations

Journal Article
TL;DR: Kane's framework addresses both the multiplicity of validity types and the failure to prioritise among sources of validity evidence by emphasising key inferences as the assessment progresses from a single observation to a final decision.
Abstract: Context Assessment is central to medical education and the validation of assessments is vital to their use. Earlier validity frameworks suffer from a multiplicity of types of validity or failure to prioritise among sources of validity evidence. Kane's framework addresses both concerns by emphasising key inferences as the assessment progresses from a single observation to a final decision. Evidence evaluating these inferences is planned and presented as a validity argument. Objectives We aim to offer a practical introduction to the key concepts of Kane's framework that educators will find accessible and applicable to a wide range of assessment tools and activities. Results All assessments are ultimately intended to facilitate a defensible decision about the person being assessed. Validation is the process of collecting and interpreting evidence to support that decision. Rigorous validation involves articulating the claims and assumptions associated with the proposed decision (the interpretation/use argument), empirically testing these assumptions, and organising evidence into a coherent validity argument. Kane identifies four inferences in the validity argument: Scoring (translating an observation into one or more scores); Generalisation (using the score[s] as a reflection of performance in a test setting); Extrapolation (using the score[s] as a reflection of real-world performance); and Implications (applying the score[s] to inform a decision or action). Evidence should be collected to support each of these inferences and should focus on the most questionable assumptions in the chain of inference. Key assumptions (and needed evidence) vary depending on the assessment's intended use or associated decision. Kane's framework applies to quantitative and qualitative assessments, and to individual tests and programmes of assessment. Conclusions Validation focuses on evaluating the key claims, assumptions and inferences that link assessment scores with their intended interpretations and uses. The Implications and associated decisions are the most important inferences in the validity argument.

355 citations

Journal Article
01 Feb 2013 - Surgery
TL;DR: The quantity and quality of studies containing an economic analysis of simulation-based medical education (SBME) for the training of health professions learners are summarized, and a comprehensive model for accounting and reporting costs in SBME is proposed.

288 citations


Cited by
20 Jan 2017
TL;DR: Constructing Grounded Theory: A Practical Guide through Qualitative Analysis, as mentioned in this paper, offers a practical guide to building theory from data through qualitative analysis and is a good starting point for such a study.
Abstract: Qualitative research is an important tool for understanding society and human behaviour, and grounded theory is one of several qualitative methodologies attracting growing interest and popularity among scholars and researchers in the social sciences and related fields such as behavioural science, sociology, public health, nursing, social psychology, education, political science, and information studies. The book "Constructing Grounded Theory: A Practical Guide through Qualitative Analysis" helps readers understand the development of grounded theory research practice, along with systematic guidelines and processes for carrying it out. It is therefore well worth reading, especially for early-career researchers seeking to apply this understanding in their own work, while experienced researchers can read it to broaden their methodological horizons.

4,417 citations

Journal Article

4,293 citations

01 Jan 2006
TL;DR: The Standards provide a framework pointing to the effectiveness of quality instruments in those situations where their use is supported by validation data.
Abstract: Educational and psychological testing and assessment are among the most important contributions of the behavioural sciences to our society, offering fundamental and significant improvements over earlier practices. Although it cannot be claimed that all tests are sufficiently refined, nor that all testing is sensible and useful, a large body of evidence points to the effectiveness of quality instruments in those situations where their use is supported by validation data. The proper use of tests can lead to better decisions about individuals and programmes than would be possible without them, and can also point the way to broader and fairer access to education and employment. Poor use of tests, however, can cause considerable harm to test takers and to other participants in decisions based on test data. The aim of the Standards is to promote the sound and ethical use of tests and to establish a basis for evaluating the quality of testing practices. Their publication is intended to set criteria for evaluating tests, the conduct of testing, and the consequences of test use. Although judgements about the appropriateness of a test or its application should rest primarily on professional expertise, the Standards provide a framework that ensures all relevant issues are covered. It would be desirable for all authors, sponsors, publishers, and users of professional tests to adopt the Standards and to encourage others to do the same.

3,905 citations

Journal Article
TL;DR: Doing Qualitative Research: A Practical Handbook, by David Silverman, Los Angeles, Sage, 2010, 456 pp., AU$65.00, ISBN 978-1-84860-033-1, ISBN 978-1-94960-034-8, as mentioned in this paper.
Abstract: Doing qualitative research: a practical handbook, by David Silverman, Los Angeles, Sage, 2010, 456 pp., AU$65.00, ISBN 978-1-84860-033-1, ISBN 978-1-94960-034-8. Available in Australia and New Zealand.

2,295 citations