Author

Deborah J. Cook

Bio: Deborah J. Cook is an academic researcher from McMaster University. She has contributed to research on topics including intensive care and the intensive care unit. She has an h-index of 173 and has co-authored 907 publications receiving 148,928 citations. Her previous affiliations include McMaster University Medical Centre and Queen's University.


Papers
Journal ArticleDOI
TL;DR: The development, organization, and operational methods of these groups illustrate several collaborative models for clinical investigations in the intensive care unit, highlighting a cohesive spirit, a sense of mission to achieve shared research goals, and acknowledgment that such an organization is much more than the sum of its parts.
Abstract: Objective: To describe the development, organization, and operation of several collaborative groups conducting investigator-initiated multicenter clinical research in adult critical care. Design: To review the process by which investigator-initiated critical care clinical research groups were created …

50 citations

Journal ArticleDOI
TL;DR: Pretest probability and a modified CPIS, which excludes culture results, are of limited utility in the diagnosis of late-onset ventilator-associated pneumonia.

50 citations

Journal ArticleDOI
TL;DR: This paper reviews the steps associated with recognising the opportunity and the need to include quality of life instruments during the investigation, choosing the most suitable instrument(s) and interpreting the results.
Abstract: The importance of measuring changes in a patient’s quality of life when evaluating the efficacy of new drugs is increasingly recognised. In this paper, we review the steps associated with this process — recognising the opportunity and the need to include quality of life instruments during the investigation, choosing the most suitable instrument(s) and interpreting the results. To be useful in clinical trials, quality of life measures must be both responsive (able to detect all important differences) and valid. Generic instruments are applicable to a wide variety of populations but may lack responsiveness. Disease-specific instruments are more likely to be responsive and are directly relevant to patients and clinicians. The approach to measurement in a specific clinical trial should be dictated by the goals of the investigators.

50 citations

Journal ArticleDOI
TL;DR: Recommendations are proposed to clinical investigators and research ethics committees regarding clinical and health services research on pandemic-related critical illness, including strategies such as expedited and centralized research ethics committee reviews and alternate consent models.
Abstract: Pandemic H1N1 influenza is projected to be unprecedented in its scope, causing acute critical illness among thousands of young otherwise healthy adults, who will need advanced life support. Rigorous, relevant, timely, and ethical clinical and health services research is crucial to improve their care …

50 citations

Journal ArticleDOI
TL;DR: The results showed that coagulation and platelet function are impaired by all 3 colloids, and the liberal use of colloids may be called into question because of the negative effects on coagulations and difficulties in reversing the effects.
Abstract: It is not uncommon for patients to have an expected death in an ICU. This review covers issues related to the end of life in the absence of discordance between the patient's family and caregivers.

49 citations


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of meta-analyses of randomized controlled trials; this paper summarizes its revision and renaming as PRISMA.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Journal ArticleDOI
04 Sep 2003 - BMJ
TL;DR: The standard test of heterogeneity is susceptible to the number of trials included in a meta-analysis; the authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5,6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
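
For reference, a commonly cited form of the I² statistic is sketched below in standard notation; this is a summary in the usual notation (Q for Cochran's heterogeneity statistic, k for the number of studies), not a quotation from the paper itself.

% Sketch of the I^2 statistic: the percentage of total variation across studies
% attributable to heterogeneity rather than chance.
% Q is Cochran's heterogeneity statistic; df = k - 1 for k studies;
% negative values are truncated to zero.
I^{2} = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%

Under this definition, I² near 0% indicates that the variation in study results is compatible with chance alone, while larger values indicate increasing inconsistency between studies.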

45,105 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations