Author

Deborah J. Cook

Bio: Deborah J. Cook is an academic researcher from McMaster University. The author has contributed to research on topics including Intensive care and Intensive care unit. The author has an h-index of 173 and has co-authored 907 publications receiving 148,928 citations. Previous affiliations of Deborah J. Cook include McMaster University Medical Centre and Queen's University.


Papers
Journal ArticleDOI
TL;DR: Probiotics appear to reduce infectious complications, including ventilator-associated pneumonia, and may influence intensive care unit mortality; however, clinical and statistical heterogeneity and imprecise estimates preclude strong clinical recommendations.
Abstract: Objective: Critical illness results in changes to the microbiology of the gastrointestinal tract, leading to a loss of commensal flora and an overgrowth of potentially pathogenic bacteria. Administering certain strains of live bacteria (probiotics) to critically ill patients may restore balance to th…

111 citations

Journal ArticleDOI
TL;DR: In patients with sepsis, selenium supplementation at doses higher than the daily requirement may reduce mortality.
Abstract: Background: Patients with sepsis syndrome commonly have low serum selenium levels. Several randomized controlled trials have examined the efficacy of selenium supplementation on mortality in patients with sepsis. Objective: To determine the efficacy and safety of high-dose selenium supplementation comp…

111 citations

Journal ArticleDOI
TL;DR: An index of scientific quality for health-related news reports was developed and tested for reliability and sensibility; the index was found to be sensible, with only one major problem: the need for judgment in making ratings.

110 citations

Journal ArticleDOI
TL;DR: These baseline bleeding rates can inform the design of future clinical trials in critical care that use bleeding as an outcome, and HEME is a useful tool for measuring bleeding in critically ill patients.
Abstract: Purpose: To estimate the incidence, severity, duration and consequences of bleeding during critical illness, and to test the performance characteristics of a new bleeding assessment tool. Methods: Clinical bleeding assessments were performed prospectively on 100 consecutive patients admitted to a medical-surgical intensive care unit (ICU) using a novel bleeding measurement tool called HEmorrhage MEasurement (HEME). Bleeding assessments were done daily, in duplicate and independently, by blinded, trained assessors. Inter-rater agreement and construct validity of the HEME tool were calculated using φ. Risk factors for major bleeding were identified using a multivariable Cox proportional hazards model. Results: Overall, 90% of patients experienced a total of 480 bleeds, of which 94.8% were minor and 5.2% were major. Inter-rater reliability of the HEME tool was excellent (φ = 0.98, 95% CI: 0.96 to 0.99). A decrease in platelet count and a prolongation of partial thromboplastin time were independent risk factors for major bleeding, whereas renal failure and prophylactic anticoagulation were not. Patients with major bleeding received more blood transfusions and had longer ICU stays compared to patients with minor or no bleeding. Conclusions: Bleeding, although primarily minor, occurred in the majority of ICU patients. One in five patients experienced a major bleed, which was associated with abnormal coagulation tests but not with prophylactic anticoagulants. These baseline bleeding rates can inform the design of future clinical trials in critical care that use bleeding as an outcome, and HEME is a useful tool for measuring bleeding in critically ill patients.
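The abstract summarizes inter-rater agreement as a φ coefficient. As a rough illustration only, the sketch below computes φ from a 2×2 table of paired yes/no ratings; the function name and all counts are hypothetical and are not taken from the HEME study.

```python
# Minimal sketch: phi coefficient for agreement between two raters on a
# binary item (bleeding present / absent). All counts are hypothetical,
# not data from the HEME study.

def phi_coefficient(a: int, b: int, c: int, d: int) -> float:
    """Phi for a 2x2 table:
                     rater B: yes   rater B: no
    rater A: yes          a              b
    rater A: no           c              d
    """
    denom = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    if denom == 0:
        raise ValueError("phi is undefined when a marginal total is zero")
    return (a * d - b * c) / denom

# Hypothetical daily assessments: the raters agree on 92 'bleeding' and 380
# 'no bleeding' patient-days, and disagree on 8 patient-days.
print(round(phi_coefficient(92, 5, 3, 380), 3))  # ~0.95, i.e. high agreement
```

A φ near 1, as reported in the study, indicates that the two blinded assessors almost always classified the same patient-day the same way.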

110 citations

Journal Article
TL;DR: The calculation of measures of association is shown and their usefulness in clinical decision making is discussed; both the absolute risk reduction and the number needed to treat reflect the baseline risk as well as the relative risk reduction.
Abstract: In the third of a series of four articles the authors show the calculation of measures of association and discuss their usefulness in clinical decision making. From the rates of death or other "events" in experimental and control groups in a clinical trial, we can calculate the relative risk (RR) of the event after the experimental treatment, expressed as a percentage of the risk without such treatment. The absolute risk reduction (ARR) is the difference in the risk of an event between the groups. The relative risk reduction is the percentage of the baseline risk (the risk of an event in the control patients) removed as a result of therapy. The odds ratio (OR), which is the measure of choice in case-control studies, gives the ratio of the odds of an event in the experimental group to those in the control group. The OR and the RR provide limited information in reporting the results of prospective trials because they do not reflect changes in the baseline risk. The ARR and the number needed to treat, which tells the clinician how many patients need to be treated to prevent one event, reflect both the baseline risk and the relative risk reduction. If the timing of events is important--to determine whether treatment extends life, for example--survival curves are used to show when events occur over time.
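Because the abstract describes these measures only in words, a short numerical sketch may help; the counts below are invented for illustration and the helper function is not from the article.

```python
# Minimal sketch of the measures of association described above, computed
# from hypothetical trial counts (not data from the article).

def association_measures(events_exp, n_exp, events_ctrl, n_ctrl):
    eer = events_exp / n_exp    # experimental event rate
    cer = events_ctrl / n_ctrl  # control (baseline) event rate
    return {
        "RR": eer / cer,                              # relative risk
        "RRR": (cer - eer) / cer,                     # relative risk reduction
        "ARR": cer - eer,                             # absolute risk reduction
        "NNT": 1 / (cer - eer),                       # number needed to treat
        "OR": (eer / (1 - eer)) / (cer / (1 - cer)),  # odds ratio
    }

# Hypothetical trial: 15/100 deaths with treatment vs. 20/100 with control.
for name, value in association_measures(15, 100, 20, 100).items():
    print(f"{name}: {value:.2f}")
```

On these invented numbers, RR = 0.75, RRR = 25%, ARR = 0.05 and NNT = 20; halving the baseline risk would leave the RR and RRR unchanged but double the NNT, which is the abstract's point about why the ARR and NNT reflect the baseline risk.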

109 citations


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Journal ArticleDOI
04 Sep 2003 - BMJ
TL;DR: A new quantity, I², is developed, which the authors believe gives a better measure of the consistency between trials in a meta-analysis than the usual heterogeneity test, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
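The abstract does not reproduce the formula, but I² is conventionally derived from Cochran's Q statistic and the degrees of freedom; the sketch below assumes that standard definition, and the inputs are invented for illustration.

```python
# Minimal sketch: I-squared from Cochran's Q, assuming the usual definition
# I^2 = max(0, (Q - df) / Q) * 100%, with df = number of studies - 1.
# The Q value and study count below are invented for illustration.

def i_squared(q: float, num_studies: int) -> float:
    df = num_studies - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

print(f"{i_squared(q=30.0, num_studies=11):.1f}%")  # 66.7%: most variability beyond chance
```

Unlike the P value of the heterogeneity test, this percentage is not driven simply by how many trials are included, which addresses the concern the abstract raises about the usual test.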

45,105 citations

Journal ArticleDOI
TL;DR: A structured summary is provided, including, as applicable: background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions, and implications of key findings.

31,379 citations