Author

Lucy Turner

Bio: Lucy Turner is an academic researcher from the Ottawa Hospital Research Institute. The author has contributed to research in the topics of systematic reviews and randomized controlled trials. The author has an h-index of 21 and has co-authored 29 publications receiving 6,965 citations.
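For context, the h-index quoted above is the largest value h such that the author has h publications each cited at least h times. A minimal sketch of that calculation in Python, using hypothetical citation counts rather than the author's actual per-paper figures:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical example: an author with four papers cited 8028, 591, 586 and 3 times.
    print(h_index([8028, 591, 586, 3]))  # -> 3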

Papers
Journal Article
12 Oct 2016-BMJ
TL;DR: The authors develop ROBINS-I (Risk Of Bias In Non-randomised Studies - of Interventions), a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.

8,028 citations

Journal Article
TL;DR: This review updates the earlier systematic review assessing whether journal endorsement of the 1996 and 2001 CONSORT checklists influences the completeness of reporting of RCTs published in medical journals.
Abstract: Background: An overwhelming body of evidence stating that the completeness of reporting of randomised controlled trials (RCTs) is not optimal has accrued over time. In the mid-1990s, in response to these concerns, an international group of clinical trialists, statisticians, epidemiologists, and biomedical journal editors developed the CONsolidated Standards Of Reporting Trials (CONSORT) Statement. The CONSORT Statement, most recently updated in March 2010, is an evidence-based minimum set of recommendations, including a checklist and flow diagram, for reporting RCTs and is intended to facilitate the complete and transparent reporting of trials and aid their critical appraisal and interpretation. In 2006, a systematic review of eight studies evaluating the "effectiveness of CONSORT in improving reporting quality in journals" was published.

Objectives: To update the earlier systematic review assessing whether journal endorsement of the 1996 and 2001 CONSORT checklists influences the completeness of reporting of RCTs published in medical journals.

Search methods: We conducted electronic searches, known-item searching, and reference list scans to identify reports of evaluations assessing the completeness of reporting of RCTs. The electronic search strategy was developed in MEDLINE and tailored to EMBASE. We searched the Cochrane Methodology Register and the Cochrane Database of Systematic Reviews using the Wiley interface. We searched the Science Citation Index, Social Science Citation Index, and Arts and Humanities Citation Index through the ISI Web of Knowledge interface. We conducted all searches to identify reports published between January 2005 and March 2010, inclusive.

Selection criteria: In addition to studies identified in the original systematic review on this topic, comparative studies evaluating the completeness of reporting of RCTs in any of the following comparison groups were eligible for inclusion in this review: 1) completeness of reporting of RCTs published in journals that have and have not endorsed the CONSORT Statement; 2) completeness of reporting of RCTs published in CONSORT-endorsing journals before and after endorsement; or 3) completeness of reporting of RCTs before and after the publication of the CONSORT Statement (1996 or 2001). We used a broad definition of CONSORT endorsement that includes any of the following: (a) a requirement or recommendation in the journal's 'Instructions to Authors' to follow CONSORT guidelines; (b) a journal editorial statement endorsing the CONSORT Statement; or (c) an editorial requirement for authors to submit a CONSORT checklist and/or flow diagram with their manuscript. We contacted authors of evaluations reporting data that could be included in any comparison group(s) but not presented as such in the published report, and asked them to provide additional data in order to determine the eligibility of their evaluation. Evaluations were not excluded due to language of publication or validity assessment.

Data collection and analysis: We completed screening and data extraction using standardised electronic forms, where conflicts, reasons for exclusion, and level of agreement were all automatically and centrally managed in web-based management software (DistillerSR®). One of two authors extracted general characteristics of included evaluations, and all data were verified by a second author. Data describing completeness of reporting were extracted by one author using a pre-specified form; a 10% random sample of evaluations was verified by a second author. Any discrepancies were discussed by both authors; we made no modifications to the extracted data. Validity assessments of included evaluations were conducted by one author and independently verified by one of three authors. We resolved all conflicts by consensus. For each comparison we collected data on 27 outcomes: the 22 items of the CONSORT 2001 checklist, plus four items relating to the reporting of blinding and one aggregate CONSORT score. Where reported, we extracted and qualitatively synthesised data on the methodological quality of RCTs, by scale or score.

Main results: Fifty-three publications reporting 50 evaluations were included. The total number of RCTs assessed within evaluations was 16,604 (median per evaluation 123, interquartile range (IQR) 77 to 226), published in a median of six journals (IQR 3 to 26). Characteristics of the included RCT populations were variable, resulting in heterogeneity between included evaluations. Validity assessments of included studies resulted in largely unclear judgements. The included evaluations are not RCTs, and less than 8% (4/53) of the evaluations reported adjusting for potential confounding factors. Twenty-five of 27 outcomes assessing completeness of reporting in RCTs appeared to favour CONSORT-endorsing journals over non-endorsers, of which five were statistically significant. 'Allocation concealment' showed the largest effect, with a risk ratio (RR) of 1.81 (99% confidence interval (CI) 1.25 to 2.61), suggesting that 81% more RCTs published in CONSORT-endorsing journals adequately describe allocation concealment compared to those published in non-endorsing journals. Allocation concealment was reported adequately in 45% (393/876) of RCTs in CONSORT-endorsing journals and in 22% (329/1520) of RCTs in non-endorsing journals. Other outcomes with significant results include: scientific rationale and background in the 'Introduction' (RR 1.07, 99% CI 1.01 to 1.14); 'sample size' (RR 1.61, 99% CI 1.13 to 2.29); method used for 'sequence generation' (RR 1.59, 99% CI 1.38 to 1.84); and an aggregate score over reported CONSORT items, the 'total sum score' (standardised mean difference (SMD) 0.68, 99% CI 0.38 to 0.98).

Authors' conclusions: Evidence has accumulated to suggest that the reporting of RCTs remains sub-optimal. This review updates a previous systematic review of eight evaluations. The findings of this review are similar to those from the original review and demonstrate that, despite the general inadequacies of reporting of RCTs, journal endorsement of the CONSORT Statement may beneficially influence the completeness of reporting of trials published in medical journals. Future prospective studies are needed to explore the influence of the CONSORT Statement depending on the extent of editorial policies to ensure adherence to CONSORT guidance.
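As an illustration of the risk-ratio arithmetic behind figures like these, the sketch below computes a crude risk ratio and 99% confidence interval (normal approximation on the log scale) from the allocation-concealment counts quoted above. This is only a naive two-by-two calculation for illustration; the review's reported RR of 1.81 is a meta-analytic estimate pooled across evaluations, so the crude value differs.

    import math

    # Allocation concealment adequately reported (counts from the abstract above):
    # CONSORT-endorsing journals: 393 of 876 RCTs; non-endorsing journals: 329 of 1520 RCTs.
    a, n1 = 393, 876
    b, n2 = 329, 1520

    rr = (a / n1) / (b / n2)                          # crude risk ratio
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # standard error of log RR
    z = 2.576                                         # ~99% two-sided normal quantile
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    print(f"crude RR {rr:.2f}, 99% CI {lo:.2f} to {hi:.2f}")
    # -> crude RR 2.07, 99% CI 1.77 to 2.43 (cf. the pooled RR 1.81 reported above)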

591 citations

Journal Article
TL;DR: The effectiveness of QI strategies varied depending on baseline HbA1c control, and interventions targeting the system of chronic disease management along with patient-mediated QI strategies should be an important component of interventions aimed at improving diabetes management.

586 citations

Journal Article
TL;DR: The findings suggest that journal endorsement of CONSORT may benefit the completeness of reporting of the RCTs they publish; comparing endorsing and non-endorsing journals, 25 of 27 outcomes improved with CONSORT endorsement, five of them significantly.
Abstract: The Consolidated Standards of Reporting Trials (CONSORT) Statement is intended to facilitate better reporting of randomised clinical trials (RCTs). A systematic review recently published in the Cochrane Library assesses whether journal endorsement of CONSORT impacts the completeness of reporting of RCTs; those findings are summarised here. Evaluations assessing the completeness of reporting of RCTs based on any of 27 outcomes formulated based on the 1996 or 2001 CONSORT checklists were included; two primary comparisons were evaluated. The 27 outcomes were: the 22 items of the 2001 CONSORT checklist, four sub-items describing blinding and a ‘total summary score’ of aggregate items, as reported. Relative risks (RR) and 99% confidence intervals were calculated to determine effect estimates for each outcome across evaluations. Fifty-three reports describing 50 evaluations of 16,604 RCTs were assessed for adherence to at least one of 27 outcomes. Sixty-nine of 81 meta-analyses show relative benefit from CONSORT endorsement on completeness of reporting. Between endorsing and non-endorsing journals, 25 outcomes are improved with CONSORT endorsement, five of these significantly (α = 0.01). The number of evaluations per meta-analysis was often low with substantial heterogeneity; validity was assessed as low or unclear for many evaluations. The results of this review suggest that journal endorsement of CONSORT may benefit the completeness of reporting of RCTs they publish. No evidence suggests that endorsement hinders the completeness of RCT reporting. However, despite relative improvements when CONSORT is endorsed by journals, the completeness of reporting of trials remains sub-optimal. Journals are not sending a clear message about endorsement to authors submitting manuscripts for publication. As such, fidelity of endorsement as an ‘intervention’ has been weak to date. Journals need to take further action regarding their endorsement and implementation of CONSORT to facilitate accurate, transparent and complete reporting of trials.
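The abstract notes that relative risks and 99% confidence intervals were calculated for each outcome across evaluations. As a rough sketch of how such estimates are commonly pooled, the code below applies fixed-effect inverse-variance weighting to log risk ratios; the input values are hypothetical, and the review's actual pooling model is not described in this summary and may differ (e.g. a random-effects model).

    import math

    # Hypothetical per-evaluation estimates for one outcome: (risk ratio, SE of log RR).
    evaluations = [(1.9, 0.20), (1.6, 0.35), (2.2, 0.25)]

    weights = [1 / se**2 for _, se in evaluations]
    pooled_log_rr = sum(w * math.log(rr) for (rr, _), w in zip(evaluations, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    z = 2.576  # ~99% two-sided normal quantile
    lo, hi = (math.exp(pooled_log_rr + s * z * pooled_se) for s in (-1, 1))
    print(f"pooled RR {math.exp(pooled_log_rr):.2f}, 99% CI {lo:.2f} to {hi:.2f}")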

488 citations

Journal Article
TL;DR: The study identifies 13 evidence-based characteristics by which predatory journals may potentially be distinguished from presumed legitimate journals; these may be useful for authors who are assessing journals for possible submission, or for others such as universities evaluating candidates' publications as part of the hiring process.
Abstract: The Internet has transformed scholarly publishing, most notably, by the introduction of open access publishing. Recently, there has been a rise of online journals characterized as ‘predatory’, which actively solicit manuscripts and charge publications fees without providing robust peer review and editorial services. We carried out a cross-sectional comparison of characteristics of potential predatory, legitimate open access, and legitimate subscription-based biomedical journals. On July 10, 2014, scholarly journals from each of the following groups were identified – potential predatory journals (source: Beall’s List), presumed legitimate, fully open access journals (source: PubMed Central), and presumed legitimate subscription-based (including hybrid) journals (source: Abridged Index Medicus). MEDLINE journal inclusion criteria were used to screen and identify biomedical journals from within the potential predatory journals group. One hundred journals from each group were randomly selected. Journal characteristics (e.g., website integrity, look and feel, editors and staff, editorial/peer review process, instructions to authors, publication model, copyright and licensing, journal location, and contact) were collected by one assessor and verified by a second. Summary statistics were calculated. Ninety-three predatory journals, 99 open access, and 100 subscription-based journals were analyzed; exclusions were due to website unavailability. Many more predatory journals’ homepages contained spelling errors (61/93, 66%) and distorted or potentially unauthorized images (59/93, 63%) compared to open access journals (6/99, 6% and 5/99, 5%, respectively) and subscription-based journals (3/100, 3% and 1/100, 1%, respectively). Thirty-one (33%) predatory journals promoted a bogus impact metric – the Index Copernicus Value – versus three (3%) open access journals and no subscription-based journals. Nearly three quarters (n = 66, 73%) of predatory journals had editors or editorial board members whose affiliation with the journal was unverified versus two (2%) open access journals and one (1%) subscription-based journal in which this was the case. Predatory journals charge a considerably smaller publication fee (median $100 USD, IQR $63–$150) than open access journals ($1865 USD, IQR $800–$2205) and subscription-based hybrid journals ($3000 USD, IQR $2500–$3000). We identified 13 evidence-based characteristics by which predatory journals may potentially be distinguished from presumed legitimate journals. These may be useful for authors who are assessing journals for possible submission or for others, such as universities evaluating candidates’ publications as part of the hiring process.
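The fee comparison above is reported as medians with interquartile ranges. A minimal sketch of that summary-statistic calculation, using a small set of hypothetical article-processing charges rather than the study's actual data:

    import statistics

    # Hypothetical article-processing charges (USD) for a handful of journals in one group.
    fees = [50, 63, 100, 120, 150, 300]

    median_fee = statistics.median(fees)
    q1, _, q3 = statistics.quantiles(fees, n=4)   # quartile cut points
    print(f"median ${median_fee:.0f}, IQR ${q1:.0f} to ${q3:.0f}")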

281 citations


Cited by
Journal Article
29 Mar 2021-BMJ
TL;DR: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found; the PRISMA 2020 statement updates and replaces the 2009 guidance.
Abstract: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

16,613 citations

Journal Article
TL;DR: A reporting guideline is described, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015), which consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review.
Abstract: Systematic reviews should build on a protocol that describes the rationale, hypothesis, and planned methods of the review; few reviews report whether a protocol exists. Detailed, well-described protocols can facilitate the understanding and appraisal of the review methods, as well as the detection of modifications to methods and selective reporting in completed reviews. We describe the development of a reporting guideline, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015). PRISMA-P consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review. Funders and those commissioning reviews might consider mandating the use of the checklist to facilitate the submission of relevant protocol information in funding applications. Similarly, peer reviewers and editors can use the guidance to gauge the completeness and transparency of a systematic review protocol submitted for publication in a journal or other medium.

14,708 citations

Journal Article
02 Jan 2015-BMJ
TL;DR: The PRISMA-P checklist provides 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol, each accompanied by a model example from an existing published protocol.
Abstract: Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols--PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol.This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.

9,361 citations

Journal Article
28 Aug 2019-BMJ
TL;DR: The Cochrane risk-of-bias tool has been updated to respond to developments in understanding how bias arises in randomised trials, and to address user feedback on and limitations of the original tool.
Abstract: Assessment of risk of bias is regarded as an essential component of a systematic review on the effects of an intervention. The most commonly used tool for randomised trials is the Cochrane risk-of-bias tool. We updated the tool to respond to developments in understanding how bias arises in randomised trials, and to address user feedback on and limitations of the original tool.

9,228 citations