Author

Peter C Gøtzsche

Bio: Peter C Gøtzsche is an academic researcher from the Cochrane Collaboration. The author has contributed to research in topics: Systematic review & Placebo. The author has an h-index of 90 and has co-authored 413 publications receiving 147,009 citations. Previous affiliations of Peter C Gøtzsche include the University of Copenhagen & Copenhagen University Hospital.


Papers
Journal ArticleDOI
TL;DR: An assay based on production of HIV antigen in cultures of CD4+ lymphocytes infected in vitro with cell-free virus was established, and it was possible to isolate, propagate, and reliably determine the zidovudine susceptibility of HIV isolates from all patients despite differences in cellular tropism and syncytium-inducing capacity.

11 citations

Journal ArticleDOI
05 Apr 2016-JAMA
TL;DR: Screening has not reduced total mortality, and it is therefore misleading to claim that “screening saves lives.” If recommendations are based on poor evidence, rather than on the most reliable trials, interventions will continue to be used that lead to much harm, with little or no benefit.
Abstract: Breast Cancer Screening: Benefit or Harm? To the Editor: In the systematic review of breast cancer screening, Dr Myers and colleagues1 claimed that our Cochrane review2 showed that breast screening reduces cause-specific mortality by 19%, that there was no significant heterogeneity, and that our results were similar to those of other reviews. This misrepresents our findings and creates an impression of scientific agreement that does not exist. It is also not correct that our estimate that 10 women were overdiagnosed for each avoided death from breast cancer was based on “all trials.” We documented important methodological differences and pronounced heterogeneity between the results of poorly and adequately randomized trials (I2 = 78%). Trials with adequate randomization found little or no benefit (relative risk, 0.90 [95% CI, 0.79-1.02] for adequately randomized trials vs 0.75 [95% CI, 0.67-0.83] for poorly randomized trials).2 Other researchers have expressed similar concerns.3

When study methods provide a compelling explanation for substantial differences in results between studies, the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) handbook4 recommends using the estimates from trials with a lower risk of bias. However, Myers and colleagues did not properly consider the differences in quality of studies when they used GRADE. The doubtful benefit of breast screening at the population level was confirmed in observational studies from Norway and Denmark, the only countries that allow use of contemporary, same-age control groups.2 Observational studies without control groups are less reliable, no matter how well designed, and improved therapy can explain the entire observed mortality reduction over the past decades.5 When the benefit is overestimated, Cancer Intervention and Surveillance Modeling Network models of the balance between benefits and harms become misleading.1

The new American Cancer Society (ACS) guidelines are a step in the right direction, but the insights that led to the recommendations are not new, and they do not fully adopt the evidence-based approach. Overconfidence in flawed trials, fueled by economic conflicts of interest and good intentions, has led to many women being given diagnoses of breast cancer that they did not need, producing unwarranted fear and psychological stress and exposing them to treatment that can only harm them. Treatment of overdiagnosed, healthy women kills many of them, and total mortality is therefore the proper outcome. Screening has not reduced total mortality,2 and it is therefore misleading to claim that “screening saves lives.” If recommendations are based on poor evidence, rather than the most reliable trials, interventions will continue to be used that lead to much harm, with little or no benefit.
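The letter's argument hinges on the pronounced heterogeneity (I2 = 78%) between poorly and adequately randomized trials. As a minimal sketch of what that statistic measures, here is Cochran's Q and I2 computed from inverse-variance weights; the effect estimates and standard errors below are illustrative placeholders, not the Cochrane review's trial data.

```python
# Minimal sketch: Cochran's Q and I^2 quantify how much of the variation between
# trial results exceeds what chance alone would produce. All numbers are made up.
import math

def i_squared(log_rrs, std_errs):
    """Return (Q, I^2 in %) for log relative risks with standard errors."""
    weights = [1.0 / se**2 for se in std_errs]                       # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
    q = sum(w * (y - pooled)**2 for w, y in zip(weights, log_rrs))   # Cochran's Q
    df = len(log_rrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical subgroup estimates: adequately randomized trials near RR = 0.90,
# poorly randomized trials near RR = 0.75.
log_rrs  = [math.log(0.90), math.log(0.92), math.log(0.75), math.log(0.73)]
std_errs = [0.06, 0.07, 0.05, 0.06]
q, i2 = i_squared(log_rrs, std_errs)
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```

A high I2 of this kind indicates that most of the between-trial variation reflects genuine differences (here, trial quality) rather than chance, which is why the letter argues the two subgroups should not be pooled into a single estimate.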

11 citations

Journal ArticleDOI
TL;DR: Trial registration, transparency, and less reliance on industry trials are essential to reduce the number of trials and improve their quality.
Abstract: Trial registration, transparency and less reliance on industry trials are essential.

11 citations

Journal ArticleDOI

11 citations

Journal ArticleDOI
30 Mar 2012-PLOS ONE
TL;DR: Information from Danish providers of health checks was sparse, and tests were often offered against existing evidence or despite a lack of evidence, although evidence supporting screening using body mass index, blood pressure, cholesterol, and faecal occult blood testing was found.
Abstract: Objective: To investigate whether Danish providers of general health checks present a balanced account of possible benefits and harms on their websites and whether the health checks are evidence-based.

11 citations


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews.

Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7

In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance document called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots (plots of the trials' effect estimates against sample size) are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract:
Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared with large trials, and we assessed the prevalence of bias in published meta-analyses.
Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews.
Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision.
Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias.
Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution.
Key messages:
- Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials.
- Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases.
- Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
- Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews.
- Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
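As a rough illustration of the asymmetry measure described above (the intercept from regressing standard normal deviates on precision), the sketch below uses made-up effect estimates and standard errors and plain ordinary least squares; it is not the authors' code or analysis.

```python
# Minimal sketch of a regression-based funnel plot asymmetry test: regress each
# trial's standard normal deviate (estimate / SE) on its precision (1 / SE).
# An intercept far from zero suggests asymmetry. All numbers are made up.

def egger_intercept(estimates, std_errs):
    """Ordinary least-squares intercept of (estimate/SE) on (1/SE)."""
    z = [e / s for e, s in zip(estimates, std_errs)]   # standard normal deviates
    p = [1.0 / s for s in std_errs]                    # precisions
    n = len(z)
    mean_p, mean_z = sum(p) / n, sum(z) / n
    slope = sum((pi - mean_p) * (zi - mean_z) for pi, zi in zip(p, z)) / \
            sum((pi - mean_p) ** 2 for pi in p)
    return mean_z - slope * mean_p                     # intercept

# Hypothetical log odds ratios: small trials (large SE) show larger effects
# than the big trials, the classic small-study pattern.
estimates = [-0.80, -0.70, -0.55, -0.30, -0.20]
std_errs  = [0.40, 0.35, 0.25, 0.15, 0.10]
print(f"Regression intercept: {egger_intercept(estimates, std_errs):.2f}")
```

In an unbiased set of trials the standard normal deviates scale with precision and the regression line runs roughly through the origin; when small studies report disproportionately large effects, as in this toy data, the intercept drifts away from zero.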

37,989 citations

Journal ArticleDOI
TL;DR: In this review, the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be carried out using Stata commands.
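The review itself illustrates these procedures with Stata commands. As a hedged sketch in Python of one such procedure, fixed-effect inverse-variance pooling of log odds ratios, the example below uses made-up 2x2 counts and is not the authors' code; it covers only one of the several methods the review describes.

```python
# Fixed-effect inverse-variance pooling of log odds ratios for binary outcomes.
# Each study is (events_treated, n_treated, events_control, n_control).
# All counts are illustrative placeholders.
import math

def pooled_odds_ratio(studies):
    """Return the pooled odds ratio with a 95% confidence interval."""
    num, den = 0.0, 0.0
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c                      # non-events in each arm
        log_or = math.log((a * d) / (b * c))       # study log odds ratio
        var = 1/a + 1/b + 1/c + 1/d                # its approximate variance
        num += log_or / var                        # weight = 1 / variance
        den += 1 / var
    pooled, se = num / den, math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

studies = [(12, 100, 20, 100), (30, 250, 45, 250), (8, 80, 15, 80)]
or_, lo, hi = pooled_odds_ratio(studies)
print(f"Pooled OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```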

31,656 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions; and implications of key findings.

31,379 citations