Author

Peter C Gøtzsche

Bio: Peter C Gøtzsche is an academic researcher from the Cochrane Collaboration. The author has contributed to research on topics including systematic reviews and placebo. The author has an h-index of 90 and has co-authored 413 publications receiving 147,009 citations. Previous affiliations of Peter C Gøtzsche include the University of Copenhagen and Copenhagen University Hospital.


Papers
Journal Article (DOI)
TL;DR: In the context of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative, the authors formulated recommendations on what an accurate report of an observational study should contain.

296 citations

Journal Article (DOI)
20 Nov 2012-BMJ
TL;DR: General health checks did not reduce morbidity or mortality, neither overall nor for cardiovascular or cancer causes, although they increased the number of new diagnoses.
Abstract: Objectives To quantify the benefits and harms of general health checks in adults with an emphasis on patient-relevant outcomes such as morbidity and mortality rather than on surrogate outcomes. Design Cochrane systematic review and meta-analysis of randomised trials. For mortality, we analysed the results with random effects meta-analysis, and for other outcomes we did a qualitative synthesis as meta-analysis was not feasible. Data sources Medline, EMBASE, Healthstar, Cochrane Library, Cochrane Central Register of Controlled Trials, CINAHL, EPOC register, ClinicalTrials.gov, and WHO ICTRP, supplemented by manual searches of reference lists of included studies, citation tracking (Web of Knowledge), and contacts with trialists. Selection criteria Randomised trials comparing health checks with no health checks in adult populations unselected for disease or risk factors. Health checks defined as screening general populations for more than one disease or risk factor in more than one organ system. We did not include geriatric trials. Data extraction Two observers independently assessed eligibility, extracted data, and assessed the risk of bias. We contacted authors for additional outcomes or trial details when necessary. Results We identified 16 trials, 14 of which had available outcome data (182 880 participants). Nine trials provided data on total mortality (11 940 deaths), and they gave a risk ratio of 0.99 (95% confidence interval 0.95 to 1.03). Eight trials provided data on cardiovascular mortality (4567 deaths), risk ratio 1.03 (0.91 to 1.17), and eight on cancer mortality (3663 deaths), risk ratio 1.01 (0.92 to 1.12). Subgroup and sensitivity analyses did not alter these findings. We did not find beneficial effects of general health checks on morbidity, hospitalisation, disability, worry, additional physician visits, or absence from work, but not all trials reported on these outcomes. One trial found that health checks led to a 20% increase in the total number of new diagnoses per participant over six years compared with the control group and an increased number of people with self reported chronic conditions, and one trial found an increased prevalence of hypertension and hypercholesterolaemia. Two out of four trials found an increased use of antihypertensives. Two out of four trials found small beneficial effects on self reported health, which could be due to bias. Conclusions General health checks did not reduce morbidity or mortality, neither overall nor for cardiovascular or cancer causes, although they increased the number of new diagnoses. Important harmful outcomes were often not studied or reported. Systematic review registration Cochrane Library, doi:10.1002/14651858.CD009009.
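The mortality results above come from a random effects meta-analysis of risk ratios across trials. As a rough illustration of that general technique only, and not of the authors' actual Cochrane analysis, here is a minimal Python sketch of DerSimonian-Laird random-effects pooling of log risk ratios; the function name and the event counts in the usage line are hypothetical and are not data from the review.

```python
import numpy as np

def random_effects_rr(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooling of risk ratios.

    events_t/n_t: events and sample sizes in the intervention arms,
    events_c/n_c: events and sample sizes in the control arms.
    Illustrative only; not the review's trial-level data or code.
    """
    events_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
    events_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)

    # Per-trial log risk ratio and its approximate variance
    log_rr = np.log((events_t / n_t) / (events_c / n_c))
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c

    # Fixed-effect (inverse-variance) weights and heterogeneity statistic Q
    w = 1 / var
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)

    # Between-trial variance tau^2 (DerSimonian-Laird estimator)
    k = len(log_rr)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    # Random-effects weights, pooled risk ratio and 95% confidence interval
    w_re = 1 / (var + tau2)
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

# Hypothetical deaths and participants for three trials (not the review's data)
rr, lo, hi = random_effects_rr([120, 300, 95], [5000, 12000, 4000],
                               [118, 310, 99], [5000, 12000, 4100])
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```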

287 citations

Journal Article (DOI)
18 Aug 2010-BMJ
TL;DR: A systematic review of parallel group randomised clinical trials published in 2008 reporting a binary composite outcome found that components are often unreasonably combined, inconsistently defined, and inadequately reported.
Abstract: Objective To study how composite outcomes, which have combined several components into a single measure, are defined, reported, and interpreted. Design Systematic review of parallel group randomised clinical trials published in 2008 reporting a binary composite outcome. Two independent observers extracted the data using a standardised data sheet, and two other observers, blinded to the results, selected the most important component. Results Of 40 included trials, 29 (73%) were about cardiovascular topics and 24 (60%) were entirely or partly industry funded. Composite outcomes had a median of three components (range 2–9). Death or cardiovascular death was the most important component in 33 trials (83%). Only one trial provided a good rationale for the choice of components. We judged that the components were not of similar importance in 28 trials (70%); in 20 of these, death was combined with hospital admission. Other major problems were change in the definition of the composite outcome between the abstract, methods, and results sections (13 trials); missing, ambiguous, or uninterpretable data (9 trials); and post hoc construction of composite outcomes (4 trials). Only 24 trials (60%) provided reliable estimates for both the composite and its components, and only six trials (15%) had components of similar, or possibly similar, clinical importance and provided reliable estimates. In 11 of 16 trials with a statistically significant composite, the abstract conclusion falsely implied that the effect applied also to the most important component. Conclusions The use of composite outcomes in trials is problematic. Components are often unreasonably combined, inconsistently defined, and inadequately reported. These problems will leave many readers confused, often with an exaggerated perception of how well interventions work.

269 citations

Journal Article
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement is a checklist of items that should be addressed in articles reporting on the three main study designs of analytical epidemiology: cohort, case-control and cross-sectional studies.
Abstract: Introduction: Many questions in medical research are investigated in observational studies (1). Much of the research into the cause of diseases relies on cohort, case-control or cross-sectional studies. Observational studies also have a role in research into the benefits and harms of medical interventions (2). Randomized trials cannot answer all important questions about a given intervention. For example, observational studies are more suitable to detect rare or late adverse effects of treatments, and are more likely to provide an indication of what is achieved in daily medical practice (3). Research should be reported transparently so that readers can follow what was planned, what was done, what was found, and what conclusions were drawn. The credibility of research depends on a critical assessment by others of the strengths and weaknesses in study design, conduct and analysis. Transparent reporting is also needed to judge whether and how results can be included in systematic reviews (4,5). However, in published observational research important information is often missing or unclear. An analysis of epidemiological studies published in general medical and specialist journals found that the rationale behind the choice of potential confounding variables was often not reported (6). Only a few reports of case-control studies in psychiatry explained the methods used to identify cases and controls (7). In a survey of longitudinal studies in stroke research, 17 of 49 articles (35%) did not specify the eligibility criteria (8). Others have argued that without sufficient clarity of reporting, the benefits of research might be achieved more slowly (9), and that there is a need for guidance in reporting observational studies (10,11). Recommendations on the reporting of research can improve reporting quality. The Consolidated Standards of Reporting Trials (CONSORT) Statement was developed in 1996 and revised five years later (12). Many medical journals supported this initiative (13), which has helped to improve the quality of reports of randomized trials (14,15). Similar initiatives have followed for other research areas, e.g. for the reporting of meta-analyses of randomized trials (16) or diagnostic studies (17). We established a network of methodologists, researchers and journal editors to develop recommendations for the reporting of observational research: the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement. Aims and use of the STROBE Statement: The STROBE Statement is a checklist of items that should be addressed in articles reporting on the three main study designs of analytical epidemiology: cohort, case-control and cross-sectional studies. The intention is solely to provide guidance on how to report observational research well: these recommendations are not prescriptions for designing or conducting studies. Also, while clarity of reporting is a prerequisite to evaluation, the checklist is not an instrument to evaluate the quality of observational research. Here we present the STROBE Statement and explain how it was developed. In a detailed companion paper, the Explanation and Elaboration article (18-20), we justify the inclusion of the different checklist items, and give methodological background and published examples of what we consider transparent reporting. We strongly recommend using the STROBE checklist in conjunction with the explanatory article, which is available freely on the web sites of PLoS Medicine (www.plosmedicine.org), Annals of Internal Medicine (www.annals.org) and Epidemiology (www.epidem.com). Development of the STROBE Statement: We established the STROBE Initiative in 2004, obtained funding for a workshop and set up a web site (www.strobe-statement.org). We searched textbooks, bibliographic databases, reference lists and personal files for relevant material, including previous recommendations, empirical studies of reporting and articles describing relevant methodological research …

267 citations


Cited by
More filters
Journal Article (DOI)
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of meta-analyses of randomized controlled trials; this article summarizes its revision and renaming as PRISMA.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal Article (DOI)
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution. Key messages: Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials. Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials. Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews. Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
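The asymmetry measure described above, the intercept from a regression of standard normal deviates against precision, is straightforward to compute. The following is a minimal, unweighted Python sketch of that regression, offered under stated assumptions rather than as the paper's analysis; the effect estimates and standard errors in the usage line are hypothetical, and the published analysis may weight the regression, which this sketch omits.

```python
import numpy as np

def funnel_asymmetry_intercept(effects, std_errs):
    """Regression test for funnel plot asymmetry.

    Regresses each trial's standard normal deviate (effect / SE) on its
    precision (1 / SE); an intercept far from zero suggests asymmetry.
    Returns the intercept and its standard error. Unweighted sketch only.
    """
    effects = np.asarray(effects, float)
    std_errs = np.asarray(std_errs, float)
    snd = effects / std_errs        # standard normal deviate per trial
    precision = 1.0 / std_errs      # precision per trial

    # Ordinary least squares: snd = intercept + slope * precision
    X = np.column_stack([np.ones_like(precision), precision])
    coef, *_ = np.linalg.lstsq(X, snd, rcond=None)
    residuals = snd - X @ coef
    sigma2 = np.sum(residuals ** 2) / (len(snd) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coef[0], np.sqrt(cov[0, 0])

# Hypothetical log odds ratios and standard errors (not data from the paper)
intercept, se = funnel_asymmetry_intercept([-0.4, -0.6, -0.9, -1.2],
                                           [0.15, 0.25, 0.40, 0.55])
print(f"asymmetry intercept {intercept:.2f} (SE {se:.2f})")
```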

37,989 citations

Journal Article (DOI)
TL;DR: In this review, the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.
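The cited review demonstrates these procedures with Stata commands, which are not reproduced here. As a language-consistent sketch only, the snippet below shows one of the common fixed-effect procedures for combining binary outcomes, the Mantel-Haenszel pooled odds ratio; the 2x2 counts in the example are hypothetical.

```python
import numpy as np

def mantel_haenszel_or(a, b, c, d):
    """Mantel-Haenszel pooled odds ratio across several 2x2 tables.

    a, b: events and non-events in the treatment arms (one entry per study);
    c, d: events and non-events in the control arms.
    Illustrative sketch; not the Stata commands used in the review.
    """
    a, b, c, d = (np.asarray(x, float) for x in (a, b, c, d))
    n = a + b + c + d
    # Pooled OR = sum(a*d/n) / sum(b*c/n), weighting each study by table size
    return np.sum(a * d / n) / np.sum(b * c / n)

# Hypothetical events/non-events per arm for three studies
or_mh = mantel_haenszel_or(a=[12, 30, 8], b=[88, 170, 92],
                           c=[20, 45, 15], d=[80, 155, 85])
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```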

31,656 citations

Journal Article (DOI)
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations