Author

Deborah J. Cook

Bio: Deborah J. Cook is an academic researcher from McMaster University. The author has contributed to research in the topics of intensive care and the intensive care unit. The author has an h-index of 173 and has co-authored 907 publications receiving 148,928 citations. Previous affiliations of Deborah J. Cook include McMaster University Medical Centre and Queen's University.


Papers
Journal ArticleDOI
TL;DR: The forthcoming series of articles on systematic reviews that begins with the paper by Cook and colleagues in this issue has been designed to collate and update that information on preparing, understanding, and using systematic reviews.
Abstract: Successful clinical decisions, like most human decisions, are complex creatures [1]. In making them, we draw on information from many sources: primary data and patient preferences, our own clinical and personal experience, external rules and constraints, and scientific evidence (Figure 1). The mix of inputs to clinical decisions varies from moment to moment and from day to day, depending on the decision and the decision makers. In general, however, the proportion of scientific evidence in the mix has grown progressively over the past 150 years or so. Figure 1. Factors that enter into clinical decisions. One major reason why the mix has changed is simply the explosive increase in the amount and quality of the scientific evidence that has come from both the laboratory bench and the bedside. The maelstrom of change wrought by the molecular biology revolution has been matched at the clinical level by a tidal wave of increasingly sophisticated clinical trials. It is estimated that since the results of the first randomized clinical trials in medicine were published in the 1940s [2], roughly 100 000 randomized and controlled clinical trials have appeared in print [3], and the results of many well-conducted, completed trials remain unpublished [4]. A second reason for the growing emphasis on scientific evidence is the increasing expectation, from both within and outside of the medical profession, that physicians will produce and use the evidence in delivering care. The future holds the promise of continued expansion of the body of research information. However, it also holds the parallel threat of increasingly inadequate time and resources with which to find, evaluate, and incorporate new research knowledge into everyday clinical decision making. Fortunately, mechanisms are emerging that will help us acquire the best, most compelling, and most current research evidence. Particularly promising in this regard is the use of systematic reviews. 
Systematic reviews are concise summaries of the best available evidence that address sharply defined clinical questions [5, 6]. Of course, the concept of reviews in medicine is not new. Preparation of reviews has traditionally depended on implicit, idiosyncratic methods of data collection and interpretation. In contrast, systematic reviews use explicit and rigorous methods to identify, critically appraise, and synthesize relevant studies. As their name implies, systematic reviews, not satisfied with finding part of the truth, look for the whole truth. That is, they seek to assemble and examine all of the available high-quality evidence that bears on the clinical question at hand. Although it looks easy from the outside, producing a high-quality systematic review is extremely demanding. The realization of how difficult the task is should be reassuring to all of us who have been frustrated by our seeming inability to stay informed and up to date by combing through the literature ourselves. The concepts and techniques involved, including that of meta-analysis, are at least as subtle and complex as many of those currently used in molecular biology. In this connection, it is important to understand that a systematic review and a meta-analysis are not one and the same. Meta-analysis is a specific methodologic and statistical technique for combining quantitative data. As such, it is simply one of the tools (albeit a particularly important one) used in preparing systematic reviews. Although many of the techniques involved in creating a systematic review have been widely available for some time, the techniques for generating clinical recommendations that consider baseline risk, cost, and the totality of the evidence available from a systematic review constitute a relatively new area of research that requires dealing with a range of critical yet abstract issues, such as ambiguity, context, and confidence.
Many articles describing the conceptual basis of systematic reviews have been published during the past decade [7], but detailed, how-to information on preparing, understanding, and using systematic reviews has been scattered and incomplete. The forthcoming series of articles on systematic reviews that begins with the paper by Cook and colleagues in this issue [8] has been designed to collate and update that information. Cook and colleagues describe systematic reviews in detail, discuss their strengths and limitations, and explain how they differ from traditional, narrative reviews. The remainder of the papers in the series are divided into two categories: using systematic reviews in practice and conducting reviews. These articles are primarily broad narrative overviews. In preparing them, their authors have drawn on widely varying sources, including electronic searches of the published literature, reference lists, the Cochrane Library [3], personal files, colleagues, and personal experience. Most of the articles are directed toward practitioners who wish to learn more about what systematic reviews are and how to use them. A few are directed primarily toward specific audiences, such as physician-educators. And we hope that the last articles in the series will entice some readers to join the growing number of groups that are doing the hard but intensely rewarding work of preparing systematic reviews. Some of the articles inevitably delve into technical and seemingly arcane methodologic topics, but we make no apologies for this. Medicine at all levels is technical, and pushing the envelope inevitably involves moving out into unfamiliar and sometimes uncomfortable territory. Perhaps more important, however, is that many aspects of the systematic review process will be familiar to clinicians because these techniques are similar to the ones they use every day: collecting, filtering, synthesizing, and applying information. 
How can the full potential of the knowledge contained in systematic reviews be realized in clinical practice? There is no simple answer, but the following would help. First, developers of electronic databases must, at the very least, pioneer improved (that is, more transparent and clinically meaningful) approaches to searching, thereby giving physicians rapid, sensitive, and specific access to multiple data sources. Second, we need many more systematic reviews that address the natural history and diagnosis of disease and the benefits and potential harms of health care interventions. Third, we need to champion the production of new, well-designed, high-quality research that evaluates important patient outcomes, the raw material of systematic reviews that is a crucial part of clinical decision making. And, finally, both physicians and the health care systems in which we work need to fully embrace and tangibly support lifelong learning as an essential element in the practice of good medicine. A recent related development is an international movement to improve the reporting of clinical research, particularly the results of randomized, controlled trials [9] and meta-analyses [10]. These efforts focus on clear, comprehensive communication of the methods and results of clinically relevant research through the development and application of reporting standards that are being suggested by editors, researchers, methodologists, and consumers. These standards should allow readers to better appraise, interpret, and apply the information in published reports of research in their own practices and situations. Perhaps equally important is the possibility that these standards will create a positive ripple effect, starting at the earliest stages of research planning and extending through the conduct of clinical trials. Exciting new information pouring out of the molecular biology revolution has the potential to transform medicine.
But even this enormously powerful information will be of little use to physicians and their patients unless 1) the diagnostic and therapeutic interventions that flow from it are stringently tested in clinical trials and 2) the results of those trials are synthesized and made accessible to practitioners. Systematic reviews are thus a vital link in the great chain of evidence that stretches from the laboratory bench to the bedside. From this perspective, the awesome task of extracting the knowledge already encoded in the tens of thousands of high-quality clinical studies, published and unpublished, is arguably every bit as important to our health and well-being as the molecular biology enterprise itself. The task can only grow in size and importance as more and better trials are conducted; indeed, the task has already been likened in scope and importance to the Human Genome Project [11]. It is our earnest hope that these articles on systematic reviews will play a useful part in strengthening the chain of evidence that links research to practice. Dr. Cook: Department of Medicine, St. Joseph's Hospital, 50 Charlton Avenue East, Hamilton, Ontario L8N 4A6, Canada. Dr. Davidoff: Annals of Internal Medicine, American College of Physicians, Independence Mall West, Sixth Street at Race, Philadelphia, PA 19106.

271 citations

Journal ArticleDOI
TL;DR: A candidate framework for such a system, based on the infection, the host response, and the extent of organ dysfunction (the IRO system) is described.
Abstract: Background Sepsis is not a single disease but a complex and heterogeneous process. Its expression is variable, and its severity is influenced by the nature of the infection, the genetic background of the patient, the time to clinical intervention, the supportive care provided by the clinician, and a number of factors as yet unknown. The evaluation of effective therapies has been hampered by limitations in our ability to characterize the process and to stratify patients into more homogeneous groups with respect to pathogenesis. Objectives To develop a taxonomy of markers relevant to clinical research in sepsis and to propose a testable candidate system for stratifying patients into more therapeutically homogeneous groups. Data source An expert roundtable discussion and a MEDLINE review using search terms "marker" and "sepsis." Results Markers provide information in one or more of three domains: diagnosis, prognosis, and response to therapy. More than 80 putative markers of sepsis have been described. All correlate with the risk of mortality (prognosis), yet none has shown utility in stratifying patients with respect to therapy (diagnosis) or in titrating that therapy (response). Their limitations arise from the challenges of establishing causality in a complex disease process such as sepsis and of stratifying patients into more homogeneous populations. The former limitation may be addressed through a modification of Koch's postulates to differentiate causality from simple association. The latter suggests the need for a staging system analogous to those used in other complex disease processes such as cancer. A candidate framework for such a system, based on the infection, the host response, and the extent of organ dysfunction (the IRO system) is described. Conclusions Advances in the understanding and management of patients with sepsis will necessitate more rigorous approaches to disease description and stratification. 
Models should be developed, tested, and modified through clinical studies rather than through consensus.

270 citations

Journal ArticleDOI
TL;DR: Overall, there is little evidence of clinically important benefit of respiratory muscle training in patients with chronic airflow limitation, but the possibility that benefit may result if resistance training is conducted in a fashion that ensures generation of adequate mouth pressures may be worthy of further study.
Abstract: The purpose of this study was to determine the effect of respiratory muscle training on muscle strength and endurance, exercise capacity, and functional status in patients with chronic airflow limitation. Computerized bibliographic databases (MEDLINE and SciSearch) were searched for published clinical trials, and an independent review of 73 articles by two of the investigators identified 17 relevant randomized trials for inclusion. Study quality was assessed and descriptive information concerning the study populations, interventions, and outcome measurements was extracted. We combined effect sizes across studies (the difference between treatment and control groups divided by the pooled standard deviation of the outcome measure). Across all studies, the effect sizes and associated p-values were as follows: maximal inspiratory pressure 0.12, p = 0.38; maximal voluntary ventilation 0.43, p = 0.02; respiratory muscle endurance 0.21, p = 0.14; laboratory exercise capacity -0.01, p = 0.43; functional exercise capacity 0.20, p = 0.15; functional status 0.06, p = 0.72. Secondary analyses suggested that endurance and function may be improved if resistance training with control of breathing pattern is undertaken. Overall, there is little evidence of clinically important benefit of respiratory muscle training in patients with chronic airflow limitation. The possibility that benefit may result if resistance training is conducted in a fashion that ensures generation of adequate mouth pressures may be worthy of further study.
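The abstract defines an effect size as the between-group difference divided by the pooled standard deviation (a standardized mean difference). A minimal sketch of that calculation follows; it is illustrative only, not the authors' analysis code, and the three trials' means, standard deviations, and sample sizes are invented for the example:

```python
import math

def pooled_sd(sd_t, n_t, sd_c, n_c):
    # Pooled standard deviation of the treatment and control groups.
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))

def effect_size(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Standardized mean difference: (treatment - control) / pooled SD.
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Hypothetical maximal-inspiratory-pressure results from three trials:
# (mean_t, sd_t, n_t, mean_c, sd_c, n_c)
trials = [
    (68.0, 15.0, 20, 62.0, 14.0, 20),
    (75.0, 18.0, 15, 74.0, 17.0, 15),
    (70.0, 16.0, 25, 71.0, 16.0, 25),
]
sizes = [effect_size(*t) for t in trials]
# One simple convention for combining across studies: a sample-size-weighted mean.
weights = [t[2] + t[5] for t in trials]
combined = sum(w * d for w, d in zip(weights, sizes)) / sum(weights)
print([round(d, 2) for d in sizes], round(combined, 2))
```

Real meta-analyses typically weight by inverse variance rather than raw sample size, and apply small-sample corrections (e.g., Hedges' g); the sketch above shows only the core arithmetic the abstract describes.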

263 citations


Cited by
Journal ArticleDOI
TL;DR: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal ArticleDOI
04 Sep 2003 - BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the standard test of heterogeneity, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2-4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5,6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
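The I² described here is defined from Cochran's Q heterogeneity statistic under the usual inverse-variance setup: I² = max(0, (Q − df) / Q) × 100%. A minimal sketch follows; the four log-odds-ratio estimates and standard errors are invented purely to exercise the formula:

```python
def i_squared(estimates, std_errors):
    """I-squared from Cochran's Q: the percentage of total variation across
    studies attributable to heterogeneity rather than chance."""
    w = [1.0 / se**2 for se in std_errors]            # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled)**2 for wi, e in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

# Hypothetical effect estimates (log odds ratios) from four trials.
print(round(i_squared([0.3, 0.5, -0.1, 0.6], [0.2, 0.25, 0.3, 0.2]), 1))
# Perfectly consistent studies give I-squared of 0.
print(i_squared([0.3, 0.3, 0.3], [0.2, 0.2, 0.2]))
```

Note the point the abstract makes: Q (and its P value) grows with the number of trials, whereas I² is expressed as a percentage of total variation and is therefore easier to compare across meta-analyses of different sizes.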

45,105 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations