Author

Deborah J. Cook

Bio: Deborah J. Cook is an academic researcher from McMaster University. She has contributed to research on topics including intensive care and the intensive care unit. She has an h-index of 173, and has co-authored 907 publications receiving 148,928 citations. Previous affiliations of Deborah J. Cook include McMaster University Medical Centre and Queen's University.


Papers
Journal ArticleDOI
TL;DR: The COVID Collaborative was formed at the onset of the COVID-19 pandemic to proactively coordinate studies, help navigate multiple authentic consent encounters by different research staff, and determine which studies were suitable for coenrollment.
Abstract: OBJECTIVES: Proliferation of COVID-19 research underscored the need for improved awareness among investigators, research staff and bedside clinicians of the operational details of clinical studies. The objective was to describe the genesis, goals, participation, procedures, and outcomes of two research operations committees in an academic ICU during the COVID-19 pandemic. DESIGN: Two-phase, single-center multistudy cohort. SETTING: University-affiliated ICU in Hamilton, ON, Canada. PATIENTS: Adult patients in the ICU, medical stepdown unit, or COVID-19 ward. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: An interprofessional COVID Collaborative was convened at the pandemic onset within our department, to proactively coordinate studies, help navigate multiple authentic consent encounters by different research staff, and determine which studies would be suitable for coenrollment. From March 2020 to May 2021, five non-COVID trials continued, two were paused then restarted, and five were launched. Over 15 months, 161 patients were involved in 215 trial enrollments, 110 (51.1%) of which were into a COVID treatment trial. The overall informed consent rate (proportion agreed of those eligible and approached including a priori and deferred consent models) was 83% (215/259). The informed consent rate was lower for COVID-19 trials (110/142, 77.5%) than other trials (105/117, 89.7%; p = 0.01). Patients with COVID-19 were significantly more likely to be coenrolled in two or more studies (29/77, 37.7%) compared with other patients (13/84, 15.5%; p = 0.002). Review items for each new study were collated, refined, and evolved into a modifiable checklist template to set up each study for success. The COVID Collaborative expanded to a more formal Department of Critical Care Research Operations Committee in June 2021, supporting sustainable research operations during and beyond the pandemic. 
CONCLUSIONS: Structured coordination and increased communication about research operations among diverse research stakeholders cultivated a sense of shared purpose and enhanced the integrity of clinical research operations.

1 citation


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal ArticleDOI
04 Sep 2003-BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the standard heterogeneity test, which is susceptible to the number of trials included.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
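The abstract above introduces I² as a measure of consistency between trials. As an illustration only (not code from the paper), the commonly used definition I² = max(0, (Q − df)/Q) × 100%, where Q is Cochran's heterogeneity statistic computed with inverse-variance weights and df = k − 1 for k studies, can be sketched in Python; the function name and the example effect sizes are hypothetical.

```python
def i_squared(effects, ses):
    """Return Cochran's Q and I^2 (in %) for per-study effect estimates
    and their standard errors, using fixed-effect inverse-variance weights."""
    weights = [1.0 / se ** 2 for se in ses]                # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I^2, floored at 0%
    return q, i2
```

For example, two studies with effects 0.1 and 0.9 (each with standard error 0.1) give Q = 32 and I² ≈ 96.9%, reflecting substantial heterogeneity, while identical effects give I² = 0%.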

45,105 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations