Author

Peter C Gøtzsche

Bio: Peter C Gøtzsche is an academic researcher from the Cochrane Collaboration. The author has contributed to research in topics: Systematic review & Placebo. The author has an h-index of 90 and has co-authored 413 publications receiving 147,009 citations. Previous affiliations of Peter C Gøtzsche include University of Copenhagen & Copenhagen University Hospital.


Papers
Journal ArticleDOI
04 Jun 2014-BMJ
TL;DR: Clinical study reports should be used as the data source for systematic reviews of drugs, but they should first be checked against protocols and within themselves for accuracy and consistency.
Abstract: Objective To determine, using research on duloxetine for major depressive disorder as an example, if there are inconsistencies between protocols, clinical study reports, and main publicly available sources (journal articles and trial registries), and within clinical study reports themselves, with respect to benefits and major harms. Design Data on primary efficacy analysis and major harms extracted from each data source and compared. Setting Nine randomised placebo controlled trials of duloxetine (total 2878 patients) submitted to the European Medicines Agency (EMA) for marketing approval for major depressive disorder. Data sources Clinical study reports, including protocols as appendices (total 13 729 pages), were obtained from the EMA in May 2011. Journal articles were identified through relevant literature databases and contacting the manufacturer, Eli Lilly. ClinicalTrials.gov and the manufacturer's online clinical trial registry were searched for trial results. Results Clinical study reports fully described the primary efficacy analysis and major harms (deaths (including suicides), suicide attempts, serious adverse events, and discontinuations because of adverse events). There were minor inconsistencies in the population in the primary efficacy analysis between the protocol and clinical study report and within the clinical study report for one trial. Furthermore, we found contradictory information within the reports for seven serious adverse events and eight adverse events that led to discontinuation, but with no apparent bias. In each trial, a median of 406 (range 177-645) and 166 (100-241) treatment emergent adverse events (adverse events that emerged or worsened after study drug was started) in the randomised phase were not reported in journal articles and Lilly trial registry reports, respectively. We also found publication bias in relation to beneficial effects. Conclusion Clinical study reports contained extensive data on major harms that were unavailable in journal articles and in trial registry reports. There were inconsistencies between protocols and clinical study reports and within clinical study reports. Clinical study reports should be used as the data source for systematic reviews of drugs, but they should first be checked against protocols and within themselves for accuracy and consistency.

64 citations

Journal ArticleDOI
TL;DR: Vaccines against Pseudomonas aeruginosa cannot be recommended; the three included trials comprised 483, 476 and 37 patients, respectively, and one patient was reported to have died during the observation period.
Abstract: Background Chronic pulmonary infection in cystic fibrosis results in progressive lung damage. Once colonisation of the lungs with Pseudomonas aeruginosa occurs, it is almost impossible to eradicate. Vaccines, aimed at reducing infection with Pseudomonas aeruginosa, have been developed. This is an update of a previously published review. Objectives To assess the effectiveness of vaccination against Pseudomonas aeruginosa in cystic fibrosis. Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register using the terms vaccines AND pseudomonas (last search 30 March 2015). We previously searched PubMed using the terms vaccin* AND cystic fibrosis (last search 30 May 2013). Selection criteria Randomised trials (published or unpublished) comparing Pseudomonas aeruginosa vaccines (oral, parenteral or intranasal) with control vaccines or no intervention in cystic fibrosis. Data collection and analysis The authors independently selected trials, assessed them and extracted data. Main results Six trials were identified. Two trials were excluded since they were not randomised and one old, small trial because it was not possible to assess whether it was randomised. The three included trials comprised 483, 476 and 37 patients, respectively. No data have been published from one of the large trials, but the company stated in a press release that the trial failed to confirm the results from an earlier study and that further clinical development was suspended. In the other large trial, relative risk for chronic infection was 0.91 (95% confidence interval 0.55 to 1.49), and in the small trial, the risk was also close to one. In the large trial, one patient was reported to have died in the observation period. In that trial, 227 adverse events (4 severe) were registered in the vaccine group and 91 (1 severe) in the control group. In this large trial of a vaccine developed against flagella antigens, antibody titres against the epitopes contained in the vaccine were higher in the vaccine group compared to the placebo group (P < 0.0001). Authors' conclusions Vaccines against Pseudomonas aeruginosa cannot be recommended.
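The relative risk of 0.91 (95% confidence interval 0.55 to 1.49) quoted above is the kind of figure obtained from standard 2x2 table arithmetic on the log scale. The sketch below is a minimal, generic illustration of that calculation in Python; the event counts and the function name are hypothetical, since the abstract does not give the trial's raw numbers.

```python
# A minimal sketch of how a relative risk and its 95% CI are usually computed
# from a 2x2 table (log-scale standard error, normal approximation).
# The event counts below are hypothetical, not taken from the vaccine trial.
import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of group A vs group B with an approximate 95% confidence interval."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # standard error of log(RR), normal approximation
    se_log_rr = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: 60/240 chronically infected with vaccine, 66/240 with control
print(relative_risk(60, 240, 66, 240))  # RR ~ 0.91 for these made-up counts
```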

63 citations

Journal ArticleDOI
TL;DR: People with rheumatoid arthritis and the researchers in the study preferred non-steroidal anti-inflammatory drugs over acetaminophen/paracetamol, and there is a need for a large trial with appropriate randomisation, double-blinding, and explicit methods to measure and analyse pain and adverse effects.
Abstract: Background Nonsteroidal anti-inflammatory drugs (NSAIDs) are usually preferred over simple analgesics such as paracetamol for rheumatoid arthritis. It is not clear, however, whether the trade-offs between benefits and harms of NSAIDs are preferable to those of paracetamol (paracetamol is also called acetaminophen). Objectives To compare the benefits and harms of paracetamol with NSAIDs in patients with rheumatoid arthritis. Search methods PubMed and EMBASE databases were searched up until August 2007. Reference lists of identified articles were also searched. Selection criteria Randomised double-blind studies comparing paracetamol with an NSAID. Data collection and analysis Decisions on inclusion of trials and data extraction were performed by the two authors independently. Main results Four cross-over studies, published between 1968 and 1982, involving 121 patients, and four different NSAIDs were included. The generation of the allocation sequence and the use of methods to conceal the allocation were not described in any of the studies. The studies were double-blind but it was not clear whether the blinding was effective. Methods for collecting adverse effects were not described. The NSAIDs were preferred more often than paracetamol by the patients or the investigator. In the largest trial, 20 out of 54 patients (37%) preferred ibuprofen and 7 out of 54 (13%) paracetamol. Investigators' preference (as established by joint tenderness, grip strength and joint circumference) was 17 out of 35 for diclofenac versus 5 out of 35 for paracetamol in another trial. However, because of the weaknesses in the trials, no firm conclusion can be drawn. Authors' conclusions When considering the trade-off between the benefits and harms of non-steroidal anti-inflammatory drugs and paracetamol/acetaminophen, it is not known whether one is better than the other for rheumatoid arthritis. But people with rheumatoid arthritis and the researchers in the study preferred non-steroidal anti-inflammatory drugs over acetaminophen/paracetamol. There is a need for a large trial, with appropriate randomisation, double-blinding, test of the success of the blinding, and with explicit methods to measure and analyse pain and adverse effects.

62 citations

Journal ArticleDOI
31 Oct 1998-BMJ
TL;DR: The British Medical Research Council's trial of streptomycin for pulmonary tuberculosis, published in 1948, has been proposed as the first randomised trial in which random numbers were used and allocation of patients was effectively concealed.
Abstract: The British Medical Research Council's trial of streptomycin for pulmonary tuberculosis, published in 1948 [1], has been proposed as the first randomised trial in which random numbers were used and allocation of patients was effectively concealed. Before 1948 several randomised trials had been reported [2], but the method of randomisation was either not stated [3] or was open to selection bias, for example randomisation with use of a deck of cards [4]. The earliest of these trials was published in 1898 [5]. It investigated the effect of serum treatment on diphtheria and was conducted by the Danish Nobel laureate, Johannes Fibiger. It was the first clinical trial in which random allocation was used and emphasised as a pivotal methodological principle. This pioneering improvement in methodology, combined with a large number of patients and rigorous planning, conduct, and reporting, makes the trial a milestone in the history of clinical trials. Fibiger's trial was published in Danish and its method of randomisation has often been quoted incorrectly. We have translated central passages into English (available on the BMJ website at www.bmj.com) and discussed its methodological merit. Summary points: A large randomised clinical trial was performed as early as 1898. Random allocation was emphasised as a central methodological principle. Patients were allocated to serum or no serum according to day of admittance, which created two comparable groups. The planning, conduct, and reporting of the trial were of high quality. The efficacy of serum treatment on diphtheria was shown. The trial was the first properly conducted controlled clinical trial. Johannes A G Fibiger (1867-1928) was born in Silkeborg, Denmark (figure). After receiving his medical degree in 1890 from the University of Copenhagen he visited Robert Koch and Emil von Behring in Germany. In 1895 Fibiger was awarded a doctoral degree for a thesis on diphtheria from the …

62 citations

Book
21 Jan 2012
TL;DR: This book discusses important issues in cancer screening, randomised trials, observational studies and a little statistics, including Lynge's studies on overdiagnosis and overtreatment, and criticism of the authors' work in the Journal of Surgical Oncology.
Abstract: Foreword by Iona Heath. Foreword by Fran Visco. Acknowledgements. Introduction. What it really means to be 'controversial'. Our collaboration with the media. Important issues in cancer screening. What it means 'to have cancer'. Overdiagnosis and overtreatment. Erroneous diagnoses and carcinoma in situ. Basic issues in cancer epidemiology. Randomised trials, observational studies and a little statistics. Why screening leads to misleading survival statistics. Why 10-year survival is also misleading. Does screening work in Sweden? Stonewalling the Cochrane report on screening. The Danish National Board of Health interferes with our report. Troubling results in the Lancet. The Canadian trials. Media storm. Email from researchers. Our collaboration with the trialists. Ten letters to the editor. Creative manipulations in Sweden. Peter Dean, a remarkable character. Bad manners also in Norway. Continued troubles in Denmark. Harms dismissed by the Cochrane Breast Cancer Group. The process with the Cochrane review. Of mites and men. Confusion over who is in charge. The Lancet publishes the harms of screening. Vitriolic mass email from Peter Dean. Beating about the bush in the United Kingdom. Condemnations in Sweden. Contempt of science in Denmark and Norway. Delayed media storm in the United States after our 2001 reviews. Miettinen and Henschke's cherry-picking in the Lancet. Additional reactions in the United States. The Danish National Board of Health circles the wagons. US and Swedish 2002 meta-analyses. US Preventive Services Task Force's meta-analysis. Nystrom's updated Swedish meta-analysis. Scientific debates in the United States. Peter Dean is wrong again. Multiple errors in the International Journal of Epidemiology. Publication of entire Cochrane review obstructed for 5 years. Cochrane editors stonewall our Cochrane review. Lessons for the future. Welcome results in France. Editorial misconduct in the European Journal of Cancer. Editorial misconduct. Threats, intimidation and falsehoods. Debates in the Scientist and the Cancer Letter. Tabar's 'beyond reason' studies. Criticism of our work in the Journal of Surgical Oncology. Other observational studies of breast cancer mortality. The United States and the United Kingdom. Denmark, Lynge's 2005 study. Denmark, our 2010 study. Overdiagnosis and overtreatment. Cancers that regress spontaneously. The 1986 UK Forrest report. Overdiagnosis in the randomised trials. Systematic review of overdiagnosis in observational studies. Observational studies from Denmark and New South Wales. The doubt industry. Duffy's studies on overdiagnosis. Lynge's studies on overdiagnosis. Carcinoma in situ and the increase in mastectomies. Ad hominem attacks: a measure of desperation? UK statistician publishes in Danish. Inappropriate name-dropping. Further ad hominem arguments. Lynge's unholy mixture of politics and science. Ad hominem attacks ad infinitum. US recommendations for women aged 40-49 years. What have women been told? Website information on screening. Invitations to screening. A scandalous revision of the Danish screening leaflet. Our screening leaflet. Breast screening: the facts, or maybe not. American Cancer Society. Information from other cancer societies. Getting funding or not getting funding. What do women believe? Extraordinary exaggerations. What is the ratio between benefits and harms? Duffy's 'funny' numbers. Exaggerating 25-fold. The exaggerations finally backfire. The ultimate exaggeration. Tabar threatens the BMJ with litigation. Falsehoods and perceived censorship in Sweden. Celebrating 20 years of breast screening in the United Kingdom. Can screening work? Plausible effect based on tumour sizes in the trials. Lead time. Plausible effect based on tumour stages in the trials. No decrease in advanced cancers. Where is screening at today? Problems with reading mammograms. False promises. Important information is being ignored. Beliefs warp evidence at conferences. Does breast screening make women live longer? Where next? Is screening a religion? A press release from Radiology that wasn't. Has all my struggle achieved anything? Why has so much evidence about screening been distorted? Time to stop breast cancer screening. Appendix 1: Tabar's explanations in the Cancer Letter and our replies. Appendix 2: Our 2008 mammography screening leaflet. Appendix 3: The press release Radiology withdrew at the last minute. Index.

61 citations


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field [1,2], and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research [3], and some health care journals are moving in this direction [4]. As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies [5]. In 1987, Sacks and colleagues [6] evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement [7]. In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials [8]. In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials the results from such analyses should be treated with considerable caution. Key messages: Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials. Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials. Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews. Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
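The outcome measure above, the intercept from regression of standard normal deviates against precision, is straightforward to compute. The sketch below is a minimal illustration of that regression-asymmetry test, assuming numpy and scipy are available; the effect estimates, standard errors and function name are hypothetical, not data or code from the paper.

```python
# A minimal sketch of the regression-asymmetry test described above: regress
# each trial's standard normal deviate (estimate / SE) on its precision (1 / SE)
# and inspect the intercept. An intercept far from zero suggests funnel plot asymmetry.
import numpy as np
from scipy import stats

def funnel_asymmetry_test(estimates, std_errors):
    """Return the intercept of the asymmetry regression and a two-sided p value for it."""
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    snd = estimates / std_errors          # standard normal deviates
    precision = 1.0 / std_errors          # precisions
    # ordinary least squares: snd = intercept + slope * precision
    result = stats.linregress(precision, snd)
    # p value for H0: intercept == 0 (linregress reports a p value for the slope only)
    t_stat = result.intercept / result.intercept_stderr
    p_value = 2 * stats.t.sf(abs(t_stat), len(estimates) - 2)
    return result.intercept, p_value

# Hypothetical meta-analysis: log odds ratios and their standard errors
log_or = [-0.8, -0.5, -0.45, -0.3, -0.25, -0.1, -0.05]
se = [0.45, 0.35, 0.30, 0.22, 0.18, 0.12, 0.08]
intercept, p = funnel_asymmetry_test(log_or, se)
print(f"intercept = {intercept:.2f}, p = {p:.3f}")
```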

37,989 citations

Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.
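The review itself demonstrates these procedures with Stata commands; as a language-neutral illustration, the sketch below shows one of the most common approaches to combining studies with binary outcomes, fixed-effect inverse-variance pooling of log odds ratios. The 2x2 counts and the function name are hypothetical, not taken from the review.

```python
# A minimal sketch of fixed-effect inverse-variance pooling of log odds ratios,
# one common way to combine binary-outcome studies. Counts below are hypothetical.
import math

def pooled_odds_ratio(tables):
    """tables: list of (events_treat, n_treat, events_ctrl, n_ctrl) tuples."""
    weights, weighted_logs = [], []
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c                    # non-events in each arm
        log_or = math.log((a * d) / (b * c))     # study log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d      # its variance (Woolf's method)
        weights.append(1 / var)                  # inverse-variance weight
        weighted_logs.append(log_or / var)       # weight * log odds ratio
    pooled_log = sum(weighted_logs) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se))
    return math.exp(pooled_log), ci

# Three hypothetical trials: (events on treatment, N treatment, events on control, N control)
print(pooled_odds_ratio([(12, 100, 20, 100), (8, 80, 15, 85), (30, 200, 44, 210)]))
```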

31,656 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations