Author

Deborah J. Cook

Bio: Deborah J. Cook is an academic researcher from McMaster University. The author has contributed to research in topics: Intensive care & Intensive care unit. The author has an h-index of 173 and has co-authored 907 publications receiving 148,928 citations. Previous affiliations of Deborah J. Cook include McMaster University Medical Centre and Queen's University.


Papers
Journal Article
TL;DR: A 69-year-old woman presenting with dyspnea had a pericardial window created for fibrinous pericarditis; a primary tumour of the pulmonary artery was suspected and was subsequently confirmed to be a spindle cell pulmonary artery sarcoma.

13 citations

Journal Article
TL;DR: To illustrate how the history and physical examination are used as diagnostic tests, a patient is followed through an encounter with a general internist; the clinical focus is the diagnosis of cerebrovascular and peripheral vascular disease.
Abstract: The history and physical examination of a patient remain the cornerstones of clinical medicine. Without an adequate history and physical examination to suggest possible differential diagnoses, the subsequent investigations of the patient may be endless (and fruitless). Although we rely heavily on the clinical examination, until recently there has been little formal evaluation of the information gained from these clinical encounters. A series entitled “The Rational Clinical Exam” in the Journal of the American Medical Association is now making a key contribution to our understanding by critiquing and summarizing the value of the evidence obtained during the initial patient-clinician encounter.1 In each encounter, we gather information that aids us in establishing a relationship with our patients, generating diagnoses, estimating prognoses, and initiating and monitoring our patients’ response to therapy. Generating diagnoses is an iterative process that includes information gathering and hypothesis generation. Data acquisition may begin with the chief complaint, history of present illness, past medical history, and findings from the physical examination. Information gathered at any stage in the clinical examination may be sufficient for hypothesis generation and a partial diagnosis that prompts action. With each new piece of information, the diagnoses that are considered, and their relative likelihoods, may change. Thus, we can consider components of the history and physical examination as individual diagnostic tests, from which sequential information is obtained that helps to rule in or rule out specific diagnoses. As with laboratory diagnostic tests, when considering relevant clinical skills as diagnostic tests, we must understand their properties of reliability and accuracy, and the appropriate use of likelihood ratios (LRs). To illustrate how the history and physical examination are used as diagnostic tests, we will follow a patient through an encounter with a general internist. Our clinical focus will be the diagnosis of cerebrovascular and peripheral vascular disease. At each step of the interaction, we will highlight the relevant clinical skills literature and the related diagnostic test properties, and demonstrate how application of this evidence increases the physician’s understanding of the patient’s problems, and guides subsequent management decisions.
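To make the likelihood-ratio arithmetic described in the abstract concrete, here is a minimal sketch in Python (not from the article; the 20% pre-test probability and the LR of 4.0 are purely illustrative) showing how a pre-test probability is converted to odds, multiplied by an LR, and converted back to a post-test probability.

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to odds, apply the LR, and convert back."""
    pre_test_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)

# Illustrative numbers only: a finding with a positive LR of 4.0
# applied to a 20% pre-test probability of disease.
print(f"{post_test_probability(0.20, 4.0):.2f}")  # prints 0.50
```

The same update can be applied to each finding in sequence, which is what allows components of the history and physical examination to be treated as sequential diagnostic tests.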

13 citations

Journal Article
TL;DR: A scenario-based, cross-sectional survey of Canadian critical care medicine and infectious disease specialists about the use of intravenous immunoglobulin (IVIG) for the treatment of severe infections found that specialists' beliefs about the efficacy of IVIG would challenge, but not preclude, the conduct of future placebo-controlled trials in severe streptococcal infections.

13 citations

Journal Article
TL;DR: Frailty is a highly prevalent prognostic factor that can be used to risk-stratify older emergency department patients with suspected infection and ED clinicians should consider screening for frailty to optimize disposition in this population.
Abstract: Background Prognosis and disposition among older emergency department (ED) patients with suspected infection remain challenging. Frailty is increasingly recognized as a predictor of poor prognosis among critically ill patients; however, its association with clinical outcomes among older ED patients with suspected infection is unknown. Methods We conducted a multicenter prospective cohort study at two tertiary care EDs. We included older ED patients (≥75 years) with suspected infection. Frailty at baseline (before the index illness) was explicitly measured for all patients by the treating physicians using the Clinical Frailty Scale (CFS). We defined frailty as a CFS score of 5–8. The primary outcome was 30-day mortality. We used multivariable logistic regression to adjust for known confounders. We also compared the prognostic accuracy of frailty with the Systemic Inflammatory Response Syndrome (SIRS) and Quick Sequential Organ Failure Assessment (qSOFA) criteria. Results We enrolled 203 patients, of whom 117 (57.6%) were frail. Frail patients were more likely to develop septic shock (adjusted odds ratio [aOR], 1.83; 95% confidence interval [CI], 1.08–2.51) and more likely to die within 30 days of ED presentation (aOR, 2.05; 95% CI, 1.02–5.24). Sensitivity for mortality was highest for the CFS (73.1%; 95% CI, 52.2–88.4%), compared with SIRS ≥ 2 (65.4%; 95% CI, 44.3–82.8%) or qSOFA ≥ 2 (38.4%; 95% CI, 20.2–59.4%). Conclusions Frailty is a highly prevalent prognostic factor that can be used to risk-stratify older ED patients with suspected infection. ED clinicians should consider screening for frailty to optimize disposition in this population.
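As a rough illustration of how a sensitivity estimate and its 95% confidence interval are obtained from raw counts, here is a small Python sketch. The counts are hypothetical (19 of 26 chosen only to reproduce a 73.1% point estimate, not the study data), and the sketch uses a Wilson score interval, which may differ from the interval method used in the paper.

```python
from math import sqrt

def sensitivity_with_wilson_ci(true_positives: int, false_negatives: int, z: float = 1.96):
    """Sensitivity = TP / (TP + FN), with an approximate Wilson score 95% CI."""
    n = true_positives + false_negatives
    p = true_positives / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half_width, centre + half_width

# Hypothetical counts: 19 screened positive among 26 patients who died.
sens, lower, upper = sensitivity_with_wilson_ci(19, 7)
print(f"sensitivity {sens:.1%} (95% CI {lower:.1%} to {upper:.1%})")
```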

13 citations


Cited by
Journal Article
TL;DR: Moher et al. introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Journal Article
04 Sep 2003 - BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the standard test of heterogeneity, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
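For readers who want to see how I² relates to the usual heterogeneity test, here is a minimal Python sketch (the effect estimates and standard errors are invented for illustration) that computes Cochran's Q under a fixed-effect model and then I² = max(0, (Q − df)/Q) × 100%.

```python
def cochran_q_and_i_squared(effects, standard_errors):
    """Cochran's Q for a fixed-effect meta-analysis and I^2 = max(0, (Q - df) / Q) * 100."""
    weights = [1.0 / se**2 for se in standard_errors]  # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Invented log odds ratios and standard errors for five hypothetical trials.
q, i2 = cochran_q_and_i_squared([0.10, 0.35, -0.05, 0.50, 0.20],
                                [0.15, 0.20, 0.25, 0.18, 0.22])
print(f"Q = {q:.2f} on {5 - 1} df, I^2 = {i2:.0f}%")
```

Unlike the P value from the heterogeneity test, I² is expressed as a percentage of total variation attributable to between-study differences, which is why it is less sensitive to the number of trials included.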

45,105 citations

Journal Article
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations