Author

Laura McAuley

Bio: Laura McAuley is an academic researcher from Children's Hospital of Eastern Ontario. The author has contributed to research in topics: Verification bias & Vascular surgery. The author has an h-index of 8 and has co-authored 8 publications receiving 965 citations.

Papers
Journal ArticleDOI
TL;DR: Whether exclusion of grey literature, compared with its inclusion in meta-analysis, provides different estimates of the effectiveness of interventions assessed in randomised trials is examined.

709 citations
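This result turns on comparing pooled estimates computed with and without grey-literature trials. As a rough illustration of that comparison (not the paper's data or code), the sketch below pools invented log odds ratios by inverse-variance weighting and reports the pooled odds ratio with and without the unpublished trials.

```python
# Sketch: how excluding grey-literature trials can shift a pooled estimate.
# All trial data below are invented for illustration; they are not taken
# from the meta-analyses examined in the paper.
import numpy as np

def pool_fixed_effect(log_or, se):
    """Inverse-variance fixed-effect pooling of log odds ratios."""
    w = 1.0 / np.asarray(se) ** 2
    return np.sum(w * np.asarray(log_or)) / np.sum(w)

# Hypothetical trials: (log odds ratio, standard error, grey literature?)
trials = [
    (-0.40, 0.20, False),   # published
    (-0.30, 0.25, False),   # published
    (-0.35, 0.30, False),   # published
    (-0.05, 0.28, True),    # grey literature (e.g. conference abstract)
    ( 0.10, 0.35, True),    # grey literature
]

log_or = [t[0] for t in trials]
se     = [t[1] for t in trials]
grey   = [t[2] for t in trials]

all_est = pool_fixed_effect(log_or, se)
pub_est = pool_fixed_effect(
    [e for e, g in zip(log_or, grey) if not g],
    [s for s, g in zip(se, grey) if not g],
)

print(f"Pooled OR, all trials:            {np.exp(all_est):.2f}")
print(f"Pooled OR, published trials only: {np.exp(pub_est):.2f}")
# Dropping the grey-literature trials here yields a larger apparent benefit,
# which is the kind of difference the paper set out to quantify.
```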

Journal ArticleDOI
TL;DR: Most physicians in all countries viewed decision rules as intended to improve the quality of health care, a convenient source of advice, and good educational tools, and those from the United States held the least positive attitudes toward decision rules.

120 citations

Journal ArticleDOI
TL;DR: It is shown that there are differences in the conclusions one would reach clinically based on the different analytical approaches dealing with publication bias, and the appropriate use of these methods improves the reliability and accuracy of meta-analysis.
Abstract: Using 14 meta-analyses that included both published (n = 199) and unpublished (n = 50) randomized trials, we evaluated the utility of different analytical approaches to detect, assess robustness, and …

66 citations
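The abstract does not say which analytical approaches were evaluated. One widely used detection method is an Egger-type regression of the standardized effect against precision; the sketch below applies it to simulated trials under crude selective publication, purely to illustrate what "detecting publication bias" can look like, not as a reconstruction of this paper's analysis.

```python
# Sketch of one common publication-bias detection method (an Egger-type
# regression of standardized effect on precision). The trial data are
# simulated; the paper does not necessarily use this exact approach.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 30 trials of a treatment with a true log odds ratio of -0.2.
n_trials = 30
se = rng.uniform(0.1, 0.6, n_trials)          # small trials have large SE
log_or = rng.normal(-0.2, se)                 # observed effects

# Crude selective publication: small trials are "published" mainly when
# they show a strong benefit; large trials are published regardless.
published = (se < 0.3) | (log_or < -0.3)
log_or, se = log_or[published], se[published]

# Egger-type regression: standardized effect vs precision.
# An intercept far from zero suggests funnel-plot asymmetry, which is
# consistent with (but not proof of) publication bias.
precision = 1.0 / se
std_effect = log_or / se
slope, intercept = np.polyfit(precision, std_effect, 1)

print(f"Published trials: {published.sum()} of {n_trials}")
print(f"Egger-type intercept: {intercept:.2f} (0 would mean no asymmetry)")
```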

Journal ArticleDOI
01 Nov 2000-Oncology
TL;DR: Canadian oncologists were quite positive about practice guidelines and reported using them frequently, and use was associated with positive attitudes about guidelines, receiving medical school training abroad and being a radiation oncologist.
Abstract: Purpose: To determine (1) Canadian oncologists’ attitudes toward practice guidelines, (2) oncologists’ self-reported use of practice guidelines and (3) physicians’ characteristics associated with guideline use.

43 citations

Journal Article
TL;DR: More rational and evidence-based use of blood-sparing methods could be promoted by the adoption of an interdisciplinary, comprehensive, coordinated approach tailored to each patient's needs.
Abstract: Objective: To identify and describe the factors influencing the use and nonuse of blood-sparing methods such as preoperative autologous donation, acute normovolemic hemodilution, and the use of cell salvage devices, hemostatic agents and erythropoietin. Design: An interview survey. Setting: Eight Ontario hospitals. Method: Interviews were conducted with chiefs of surgery, orthopedics, cardiac surgery and anesthesia, and with heads of transfusion medicine and pharmacy. Hospitals were selected using the qualitative sampling strategy of maximum variation based on their use of the methods (as reported in a previous mail survey). Results: Use of blood-sparing methods was influenced by diverse factors often operating simultaneously. These included the following: characteristics of the method (e.g., evidence of its effectiveness, ease of use, cost); perceptions and experiences of the potential adopters (experience with the method, perception of the current safety of allogeneic blood, perceived convenience or inconvenience of using the method); aspects of the practice setting (inability to move resources between hospital departments, presence of a local clinical champion); and the external environment (patient and public expectations, funding of the blood system, blood shortages). Interpretation: More rational and evidence-based use of blood-sparing methods could be promoted by the adoption of an interdisciplinary, comprehensive, coordinated approach tailored to each patient's needs.

31 citations


Cited by
Book
23 Sep 2019
TL;DR: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

21,235 citations

Journal ArticleDOI
TL;DR: A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed that consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews.
Abstract: Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.

3,583 citations
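To make the factor-analysis step concrete, the sketch below fits an 11-component exploratory factor model to a placeholder 151 × 37 item matrix with scikit-learn and picks the highest-loading item per component. In the actual study the item reduction was done by a nominal group of methodologists, so the automatic selection here is only a stand-in for that judgement.

```python
# Sketch of the kind of exploratory factor analysis step described in the
# AMSTAR paper: fit a factor model to item-level quality assessments and
# look at which item loads most strongly on each component. The data are
# random placeholders, and the real item reduction used expert judgement
# (a nominal group), not an automatic rule like the one below.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

n_reviews, n_items, n_components = 151, 37, 11
# Placeholder 0/1 matrix: rows = systematic reviews, columns = checklist items.
X = rng.integers(0, 2, size=(n_reviews, n_items)).astype(float)

fa = FactorAnalysis(n_components=n_components, random_state=0)
fa.fit(X)

# fa.components_ has shape (n_components, n_items); pick the item with the
# largest absolute loading on each component as a crude stand-in for the
# paper's one-item-per-component reduction.
selected_items = np.argmax(np.abs(fa.components_), axis=1)
print("Item index with the strongest loading per component:", selected_items)
```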

Journal ArticleDOI
TL;DR: The inability of case-mix adjustment methods to compensate for selection bias and the inability to identify non-randomised studies that are free of selection bias indicate that non-randomised studies should only be undertaken when RCTs are infeasible or unethical.
Abstract: OBJECTIVES: To consider methods and related evidence for evaluating bias in non-randomised intervention studies. DATA SOURCES: Systematic reviews and methodological papers were identified from a search of electronic databases, handsearches of key medical journals, and contact with experts working in the field. New empirical studies were conducted using data from two large randomised clinical trials. METHODS: Three systematic reviews and new empirical investigations were conducted. The reviews considered, in regard to non-randomised studies, (1) the existing evidence of bias, (2) the content of quality assessment tools, and (3) the ways that study quality has been assessed and addressed. The empirical investigations generated non-randomised studies from two large, multicentre randomised controlled trials (RCTs) by selectively resampling trial participants according to allocated treatment, centre and period. RESULTS: In the systematic reviews, eight studies compared results of randomised and non-randomised studies across multiple interventions using meta-epidemiological techniques. A total of 194 tools were identified that could be or had been used to assess non-randomised studies. Sixty tools covered at least five of six pre-specified internal validity domains. Fourteen tools covered three of four core items of particular importance for non-randomised studies. Six tools were thought suitable for use in systematic reviews. Of 511 systematic reviews that included non-randomised studies, only 169 (33%) assessed study quality. Sixty-nine reviews investigated the impact of quality on study results in a quantitative manner. The new empirical studies estimated the bias associated with non-random allocation and found that the bias could lead to consistent over- or underestimation of treatment effects; the bias also increased variation in results for both historical and concurrent controls, owing to haphazard differences in case-mix between groups. The biases were large enough to lead studies falsely to conclude significant findings of benefit or harm. Four strategies for case-mix adjustment were evaluated: none adequately adjusted for bias in historically and concurrently controlled studies. Logistic regression on average increased bias. Propensity score methods performed better, but were not satisfactory in most situations. Detailed investigation revealed that adequate adjustment can only be achieved in the unrealistic situation when selection depends on a single factor. CONCLUSIONS: Results of non-randomised studies sometimes, but not always, differ from results of randomised studies of the same intervention. Non-randomised studies may still give seriously misleading results when treated and control groups appear similar in key prognostic factors. Standard methods of case-mix adjustment do not guarantee removal of bias. Residual confounding may be high even when good prognostic data are available, and in some situations adjusted results may appear more biased than unadjusted results. Although many quality assessment tools exist and have been used for appraising non-randomised studies, most omit key quality domains. Healthcare policies based upon non-randomised studies or systematic reviews of non-randomised studies may need re-evaluation if the uncertainty in the true evidence base was not fully appreciated when policies were made.
The inability of case-mix adjustment methods to compensate for selection bias and our inability to identify non-randomised studies that are free of selection bias indicate that non-randomised studies should only be undertaken when RCTs are infeasible or unethical. Recommendations for further research include: applying the resampling methodology in other clinical areas to ascertain whether the biases described are typical; developing or refining existing quality assessment tools for non-randomised studies; investigating how quality assessments of non-randomised studies can be incorporated into reviews and the implications of individual quality features for interpretation of a review's results; examination of the reasons for the apparent failure of case-mix adjustment methods; and further evaluation of the role of the propensity score.

2,651 citations
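The report's finding that logistic regression tended to increase bias while propensity score methods did somewhat better concerns case-mix adjustment in non-randomised comparisons. The sketch below shows the basic propensity-score machinery (model the probability of treatment, then weight by its inverse) on simulated data with a single confounder; it illustrates the general technique, not the report's resampling study or its conclusions.

```python
# Sketch of propensity-score adjustment on simulated data with one
# confounder. This illustrates the general technique discussed in the
# report, not its resampling methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

x = rng.normal(size=n)                        # prognostic factor (confounder)
p_treat = 1.0 / (1.0 + np.exp(-2.0 * x))      # allocation depends on prognosis
t = rng.binomial(1, p_treat)                  # non-random treatment allocation
y = 1.0 * t + 2.0 * x + rng.normal(size=n)    # true treatment effect = 1.0

# Naive comparison ignores the confounder and is biased.
naive = y[t == 1].mean() - y[t == 0].mean()

# Propensity score: modelled probability of treatment given covariates.
ps_model = LogisticRegression().fit(x.reshape(-1, 1), t)
e = ps_model.predict_proba(x.reshape(-1, 1))[:, 1]

# Inverse-probability weighting estimate of the average treatment effect.
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

print(f"True effect: 1.00   Naive: {naive:.2f}   IPW-adjusted: {ipw:.2f}")
```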

Journal ArticleDOI
TL;DR: A systematic literature search found that among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published, and the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.
Abstract: Background Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio. Methods We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set. Results Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 stu...

2,176 citations
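The mechanism behind that inflation is that trials with unimpressive results are less likely to be published, so literature-based effect sizes are drawn from a truncated distribution. The simulation below reproduces the mechanism with invented numbers (it is not the FDA antidepressant data): trials are "published" only when nominally significant, and the published-only average drifts above the true effect.

```python
# Sketch of how selective publication inflates apparent effect sizes.
# Everything here is simulated; it is not the FDA antidepressant data.
import numpy as np

rng = np.random.default_rng(7)

true_effect = 0.3                             # true standardized mean difference
n_trials = 74
n_per_arm = rng.integers(30, 200, n_trials)

se = np.sqrt(2.0 / n_per_arm)                 # approximate SE of the SMD
observed = rng.normal(true_effect, se)        # each trial's estimate

# "Publish" a trial only if it is nominally significant and positive
# (a deliberately crude model of selective publication).
published = observed / se > 1.96

print(f"Published: {published.sum()} of {n_trials} trials")
print(f"Mean effect, all trials:       {observed.mean():.2f}")
print(f"Mean effect, published trials: {observed[published].mean():.2f}")
# The published-only average tends to overstate the true effect (0.3),
# the same direction of distortion the paper quantifies with real data.
```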

Journal ArticleDOI
13 Mar 2008-BMJ
TL;DR: The average bias associated with defects in the conduct of randomised trials varies with the type of outcome, and systematic reviewers should routinely assess the risk of bias in the results of trials and report meta-analyses restricted to trials at low risk of bias.
Abstract: OBJECTIVE: To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome. DESIGN: Combined analysis of data from three meta-epidemiological studies based on collections of meta-analyses. DATA SOURCES: 146 meta-analyses including 1346 trials examining a wide range of interventions and outcomes. MAIN OUTCOME MEASURES: Ratios of odds ratios quantifying the degree of bias associated with inadequate or unclear allocation concealment, and lack of blinding, for trials with different types of intervention and outcome. A ratio of odds ratios <1 implies that inadequately concealed or non-blinded trials exaggerate intervention effect estimates. RESULTS: In trials with subjective outcomes effect estimates were exaggerated when there was inadequate or unclear allocation concealment (ratio of odds ratios 0.69 (95% CI 0.59 to 0.82)) or lack of blinding (0.75 (0.61 to 0.93)). In contrast, there was little evidence of bias in trials with objective outcomes: ratios of odds ratios 0.91 (0.80 to 1.03) for inadequate or unclear allocation concealment and 1.01 (0.92 to 1.10) for lack of blinding. There was little evidence for a difference between trials of drug and non-drug interventions. Except for trials with all cause mortality as the outcome, the magnitude of bias varied between meta-analyses. CONCLUSIONS: The average bias associated with defects in the conduct of randomised trials varies with the type of outcome. Systematic reviewers should routinely assess the risk of bias in the results of trials, and should report meta-analyses restricted to trials at low risk of bias either as the primary analysis or in conjunction with less restrictive analyses.

2,093 citations
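The ratio of odds ratios used here is simply the pooled odds ratio from inadequately concealed (or unblinded) trials divided by the pooled odds ratio from adequately concealed trials. The worked example below computes one from invented per-trial log odds ratios to make the "less than 1 means exaggeration" convention concrete; the numbers are not from the meta-analyses in the paper.

```python
# Worked example of a ratio of odds ratios (ROR) on invented trial data:
# pool log odds ratios within "adequate" and "inadequate/unclear"
# concealment groups, then compare the pools.
import numpy as np

def pool(log_or, se):
    """Inverse-variance pooled log odds ratio."""
    w = 1.0 / np.asarray(se) ** 2
    return np.sum(w * np.asarray(log_or)) / np.sum(w)

# Hypothetical trials of a beneficial treatment (OR < 1 = benefit).
adequate_log_or,   adequate_se   = [-0.15, -0.20, -0.10], [0.10, 0.12, 0.15]
inadequate_log_or, inadequate_se = [-0.45, -0.35, -0.50], [0.14, 0.16, 0.20]

ror = np.exp(pool(inadequate_log_or, inadequate_se)
             - pool(adequate_log_or, adequate_se))
print(f"Ratio of odds ratios: {ror:.2f}")
# A value below 1 means the inadequately concealed trials report larger
# apparent benefits than the adequately concealed ones, i.e. they appear
# to exaggerate the intervention effect.
```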