Journal ArticleDOI

Translating Evidence Updates to International Standards: Is More Certainty Needed for International Standards on Decision Aids?

01 Jan 2022-Medical Decision Making (SAGE Publications)-Vol. 42, Iss: 1, pp 3-7
About: This article was published in Medical Decision Making on 2022-01-01 and is currently open access. It has received 2 citations to date. The article focuses on the topics: Decision aids & Certainty.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors explore the evidence-translator's experience of the expert-recommended process of translating guidelines into tools for decision making, action, and adherence, with the goal of improvement.

2 citations

Journal ArticleDOI
TL;DR: In this paper, the authors propose an online English teaching system based on Internet of Things technology, which can effectively improve students' English performance and allow students to better control their own progress.
Abstract: To improve students' English performance and adapt more quickly to advances in science and technology, the author proposes an online English teaching system based on Internet of Things technology. The author studies the English SPOC teaching mode and constructs a multimedia teaching system based on IoT technology, refining both the system and the teaching mode to raise the quality of English teaching. Experimental results show that with the author's method, students' scores on both written and oral exams are about 10 points higher than under the traditional teaching method. In conclusion, the online English teaching system based on Internet of Things technology can effectively improve students' English performance and allow students to better control their own progress.

1 citation

References
Journal ArticleDOI
TL;DR: The meta-analyses published previously on the same topics failed to predict the outcomes of the 12 large randomized, controlled trials accurately 35 percent of the time.
Abstract: Background: Meta-analyses are now widely used to provide evidence to support clinical strategies. However, large randomized, controlled trials are considered the gold standard in evaluating the efficacy of clinical interventions. Methods: We compared the results of large randomized, controlled trials (involving 1000 patients or more) that were published in four journals (the New England Journal of Medicine, the Lancet, the Annals of Internal Medicine, and the Journal of the American Medical Association) with the results of meta-analyses published earlier on the same topics. Regarding the principal and secondary outcomes, we judged whether the findings of the randomized trials agreed with those of the corresponding meta-analyses, and we determined whether the study results were positive (indicating that treatment improved the outcome) or negative (indicating that the outcome with treatment was the same or worse than without it) at the conventional level of statistical significance (P<0.05). Results: We identi...
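The agreement check described above can be sketched in a few lines: each result is classed as "positive" if treatment improves the outcome at P < 0.05, otherwise "negative", and a trial agrees with the earlier meta-analysis when both fall in the same class. The data below are hypothetical, not taken from the paper.

```python
def classify(effect_favors_treatment: bool, p_value: float) -> str:
    """Positive = treatment improves the outcome at P < 0.05; otherwise negative."""
    return "positive" if effect_favors_treatment and p_value < 0.05 else "negative"

# Hypothetical (meta-analysis, later large randomized trial) result pairs.
pairs = [
    (classify(True, 0.01), classify(True, 0.03)),    # both positive: agree
    (classify(True, 0.02), classify(False, 0.40)),   # disagree
    (classify(False, 0.60), classify(False, 0.70)),  # both negative: agree
]

agreement = sum(m == t for m, t in pairs) / len(pairs)
print(f"agreement: {agreement:.0%}")  # → agreement: 67%
```

This is only a schematic of the comparison rule; the study applied it to principal and secondary outcomes across matched topic areas.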

1,146 citations

Journal ArticleDOI
15 Aug 2001-JAMA
TL;DR: Despite good correlation between randomized trials and nonrandomized studies-in particular, prospective studies-discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common.
Abstract: Context: There is substantial debate about whether the results of nonrandomized studies are consistent with the results of randomized controlled trials on the same topic. Objectives: To compare results of randomized and nonrandomized studies that evaluated medical interventions and to examine characteristics that may explain discrepancies between randomized and nonrandomized studies. Data Sources: MEDLINE (1966–March 2000), the Cochrane Library (Issue 3, 2000), and major journals were searched. Study Selection: Forty-five diverse topics were identified for which both randomized trials (n = 240) and nonrandomized studies (n = 168) had been performed and had been considered in meta-analyses of binary outcomes. Data Extraction: Data on events per patient in each study arm and design and characteristics of each study considered in each meta-analysis were extracted and synthesized separately for randomized and nonrandomized studies. Data Synthesis: Very good correlation was observed between the summary odds ratios of randomized and nonrandomized studies (r = 0.75; P<.001); however, nonrandomized studies tended to show larger treatment effects (28 vs 11; P = .009). Between-study heterogeneity was frequent among randomized trials alone (23%) and very frequent among nonrandomized studies alone (41%). The summary results of the 2 types of designs differed beyond chance in 7 cases (16%). Discrepancies beyond chance were less common when only prospective studies were considered (8%). Occasional differences in sample size and timing of publication were also noted between discrepant randomized and nonrandomized studies. In 28 cases (62%), the natural logarithm of the odds ratio differed by at least 50%, and in 15 cases (33%), the odds ratio varied at least 2-fold between nonrandomized studies and randomized trials. Conclusions: Despite good correlation between randomized trials and nonrandomized studies—in particular, prospective studies—discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common.
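The two discrepancy criteria used above (log odds ratios differing by at least 50%, and odds ratios varying at least 2-fold) are simple arithmetic on pooled 2x2 summaries. A minimal sketch, using hypothetical event counts rather than the paper's data:

```python
import math

def odds_ratio(events_t: int, n_t: int, events_c: int, n_c: int) -> float:
    """Odds ratio for a two-arm study from event counts and arm sizes."""
    odds_treat = events_t / (n_t - events_t)
    odds_ctrl = events_c / (n_c - events_c)
    return odds_treat / odds_ctrl

# Hypothetical pooled estimates: one randomized, one nonrandomized.
or_rct = odds_ratio(30, 500, 45, 500)   # randomized trials
or_nrs = odds_ratio(20, 500, 50, 500)   # nonrandomized studies

# Criterion 1: natural-log odds ratios differ by at least 50% (relative).
rel_diff = abs(math.log(or_nrs) - math.log(or_rct)) / abs(math.log(or_rct))

# Criterion 2: odds ratios vary at least 2-fold.
fold = max(or_rct / or_nrs, or_nrs / or_rct)

print(f"OR (randomized)    = {or_rct:.2f}")
print(f"OR (nonrandomized) = {or_nrs:.2f}")
print(f"relative log-OR difference = {rel_diff:.0%}")
print(f"fold difference            = {fold:.2f}")
```

With these made-up counts the log odds ratios differ by well over 50% even though the odds ratios differ by less than 2-fold, which is why the paper reports the two criteria separately.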

805 citations

Journal ArticleDOI
TL;DR: G-I-N's proposed set of key components address panel composition, decision-making process, conflicts of interest, guideline objective, development methods, evidence review, basis of recommendations, ratings of evidence and recommendations, guideline review, updating processes, and funding.
Abstract: Guideline development processes vary substantially, and many guidelines do not meet basic quality criteria. Standards for guideline development can help organizations ensure that recommendations are evidence-based and can help users identify high-quality guidelines. Such organizations as the U.S. Institute of Medicine and the United Kingdom's National Institute for Health and Clinical Excellence have developed recommendations to define trustworthy guidelines within their locales. Many groups charged with guideline development find the lengthy list of standards developed by such organizations to be aspirational but infeasible to follow in entirety. Founded in 2002, the Guidelines International Network (G-I-N) is a network of guideline developers that includes 93 organizations and 89 individual members representing 46 countries. The G-I-N board of trustees recognized the importance of guideline development processes that are both rigorous and feasible even for modestly funded groups to implement and initiated an effort toward consensus about minimum standards for high-quality guidelines. In contrast to other existing standards for guideline development at national or local levels, the key components proposed by G-I-N will represent the consensus of an international, multidisciplinary group of active guideline developers. This article presents G-I-N's proposed set of key components for guideline development. These key components address panel composition, decision-making process, conflicts of interest, guideline objective, development methods, evidence review, basis of recommendations, ratings of evidence and recommendations, guideline review, updating processes, and funding. It is hoped that this article promotes discussion and eventual agreement on a set of international standards for guideline development.

635 citations

Journal ArticleDOI
TL;DR: The USPSTF updates the process by which it evaluates evidence and determines the certainty and magnitude of net benefit; estimating net benefit was critical in the final USPSTF recommendation that clinicians not provide carotid artery stenosis screening in asymptomatic people.
Abstract: The major goal of the U.S. Preventive Services Task Force (USPSTF) is to provide a reliable and accurate source of evidence-based recommendations on a wide range of preventive services. In this article, the USPSTF updates and reviews the process by which it evaluates evidence, determines the certainty and magnitude of net benefit, and gives a final letter grade to recommendations. Because direct evidence about prevention is often unavailable, the Task Force usually considers indirect evidence. To guide its selection of indirect evidence, a "chain of evidence" is constructed within an analytic framework. The Task Force examines evidence of various research designs that addresses the key questions within the framework. New terms have been added to describe the USPSTF's judgment about the evidence for each key question: "convincing," "adequate," or "inadequate." For increased clarity, the USPSTF has changed its description of overall evidence of net benefit for the preventive service from "good," "fair," or "poor" quality to "high," "moderate," or "low" certainty. This rating considers the extent to which an uninterrupted chain of evidence exists across the analytic framework. Individual studies will continue to be judged as being of "good," "fair," or "poor" quality. Using outcomes tables, the USPSTF estimates the magnitude of benefits and the magnitude of harms, and synthesizes them into an estimate of the magnitude of net benefit. Although some judgment is required at all steps, the USPSTF strives to make the process as explicit and transparent as possible. The USPSTF anticipates that its methods for making evidence-based recommendations will continue to evolve.

176 citations