Author

Peter C Gøtzsche

Bio: Peter C Gøtzsche is an academic researcher from the Cochrane Collaboration. The author has contributed to research in the topics of systematic reviews and placebo. The author has an h-index of 90, has co-authored 413 publications, and has received 147,009 citations. Previous affiliations of Peter C Gøtzsche include the University of Copenhagen and Copenhagen University Hospital.


Papers
Journal ArticleDOI
05 Jun 2002-JAMA
TL;DR: A substantial proportion of reviews had evidence of honorary and ghost authorship, and the Cochrane editorial teams contributed to most Cochrane reviews.
Abstract: Context: To determine the prevalence of honorary and ghost authorship in Cochrane reviews, how authorship is assigned, and the ways in which authors and Cochrane editorial teams contribute. Methods: Using a Web-based, self-administered survey, corresponding authors for 577 reviews published in issues 1 and 2 from 1999 of The Cochrane Library were invited to report on the prevalence of honorary and ghost authors, contributions by authors listed in the byline and members of Cochrane editorial teams, and identification of methods of assigning authorship. Responses were received for 362 reviews (63% response rate), which contained 913 authors. Results: One hundred forty-one reviews (39%) had evidence of honorary authors, 32 (9%) had evidence of ghost authors (most commonly a member of the Cochrane editorial team), and 9 (2%) had evidence of both honorary and ghost authors. The editorial teams contributed in a wide variety of ways to 301 reviews (83%). Authorship was decided by the group of authors (31%) or lead author (25%) in most reviews. Authorship order was assigned according to contribution in most reviews (76%). The 3 functions contributed to most by those listed in the byline were assessing the quality of included studies (83%), interpreting data (82%), and abstracting data from included studies (77%). Conclusions: A substantial proportion of reviews had evidence of honorary and ghost authorship. The Cochrane editorial teams contributed to most Cochrane reviews.

229 citations

Journal ArticleDOI
24 Oct 1998-BMJ
TL;DR: Current chemical and physical methods aimed at reducing exposure to allergens from house dust mites seem to be ineffective and cannot be recommended as prophylactic treatment for asthma patients sensitive to mites.
Abstract: Objective To determine whether patients with asthma who are sensitive to mites benefit from measures designed to reduce their exposure to house dust mite antigen in the home. Design Meta-analysis of randomised trials that investigated the effects on asthma patients of chemical or physical measures to control mites, or both, in comparison with an untreated control group. All trials in any language were eligible for inclusion. Subjects Patients with bronchial asthma as diagnosed by a doctor and sensitisation to mites as determined by skin prick testing, bronchial provocation testing, or serum assays for specific IgE antibodies. Main outcome measures Number of patients whose allergic symptoms improved, improvement in asthma symptoms, improvement in peak expiratory flow rate. Outcomes measured on different scales were combined using the standardised effect size method (the difference in effect was divided by the standard deviation of the measurements). Results 23 studies were included in the meta-analysis; 6 studies used chemical methods to reduce exposure to mites, 13 used physical methods, and 4 used a combination. Altogether, 41/113 patients exposed to treatment interventions improved compared with 38/117 in the control groups (odds ratio 1.20, 95% confidence interval 0.66 to 2.18). The standardised mean difference for improvement in asthma symptoms was −0.06 (95% confidence interval −0.54 to 0.41). For peak flow rate measured in the morning the standardised mean difference was −0.03 (−0.25 to 0.19). As measured in the original units this difference between the treatment and the control group corresponds to −3 l/min (95% confidence interval −25 l/min to 19 l/min). The results were similar in the subgroups of trials that reported successful reduction in exposure to mites or had long follow up times. 
Conclusion Current chemical and physical methods aimed at reducing exposure to allergens from house dust mites seem to be ineffective and cannot be recommended as prophylactic treatment for asthma patients sensitive to mites.
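The standardised effect size method described in the abstract (the difference in group means divided by the pooled standard deviation of the measurements) can be sketched in a few lines of Python. The summary statistics below are illustrative stand-ins, not data from the included trials:

```python
import math

def standardised_mean_difference(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Illustrative morning peak-flow data (l/min), not taken from any included trial:
# a raw difference of -3 l/min against an SD of roughly 100 l/min.
d = standardised_mean_difference(mean_t=410, mean_c=413, sd_t=95, sd_c=100,
                                 n_t=55, n_c=58)
print(round(d, 2))  # about -0.03
```

With these made-up numbers the SMD comes out around −0.03, the same order of magnitude as the morning peak flow result reported above, showing how a few l/min difference translates into a near-zero standardised effect.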

215 citations

Journal ArticleDOI
12 Sep 1987-BMJ
TL;DR: In this article, the authors examined double blind trials of two or more non-steroidal anti-inflammatory drugs in rheumatoid arthritis to see whether there was any bias in the references they cited.
Abstract: Articles published before 1985 describing double blind trials of two or more non-steroidal anti-inflammatory drugs in rheumatoid arthritis were examined to see whether there was any bias in the references they cited. Altogether 244 articles meeting the criteria were found through a Medline search and through examining the reference lists of the articles retrieved. The drugs compared in the studies were classified as new or as control drugs and the outcome of the trial as positive or not positive. The reference lists of all papers with references to other trials on the new drug were then examined for reference bias. Positive bias was judged to have occurred if the reference list contained a higher proportion of references with a positive outcome for that drug than among all the articles assumed to have been available to the authors (those published more than two years earlier than the index article). Altogether 133 of the 244 articles were excluded for various reasons--for example, 44 because of multiple publication and 19 because they had no references. Among the 111 articles analysed bias was not possible in the references of 35 (because all the references gave the same outcome); 10 had a neutral selection of references, 22 a negative selection, and 44 a positive selection--a significant positive bias. This bias was not caused by better scientific standing of the cited articles over the uncited ones. Thus retrieving literature by scanning reference lists may produce a biased sample of articles, and reference bias may also render the conclusions of an article less reliable.

207 citations

Journal ArticleDOI
24 Mar 2010-BMJ
TL;DR: The reductions in breast cancer mortality the authors observed in screening regions were similar to or smaller than those in non-screened areas and in age groups too young to benefit from screening, and are more likely explained by changes in risk factors and improved treatment than by screening mammography.
Abstract: Objective To determine whether the previously observed 25% reduction in breast cancer mortality in Copenhagen following the introduction of mammography screening was indeed due to screening, by using an additional screening region and five years additional follow-up. Design We used Poisson regression analyses adjusted for changes in age distribution to compare the annual percentage change in breast cancer mortality in areas where screening was used with the change in areas where it was not used during 10 years before screening was introduced and for 10 years after screening was in practice (starting five years after introduction of screening). Setting Copenhagen, where mammography screening started in 1991, and Funen county, where screening was introduced in 1993. The rest of Denmark (about 80% of the population) served as an unscreened control group. Participants All Danish women recorded in the Cause of Death Register and Statistics Denmark for 1971-2006. Main outcome measure Annual percentage change in breast cancer mortality in regions offering mammography screening and those not offering screening. Results In women who could benefit from screening (ages 55-74 years), we found a mortality decline of 1% per year in the screening areas (relative risk (RR) 0.99, 95% confidence interval (CI) 0.96 to 1.01) during the 10 year period when screening could have had an effect (1997-2006). In women of the same age in the non-screening areas, there was a decline of 2% in mortality per year (RR 0.98, 95% CI 0.97 to 0.99) in the same 10 year period. In women who were too young to benefit from screening (ages 35-55 years), breast cancer mortality during 1997-2006 declined 5% per year (RR 0.95, CI 0.92 to 0.98) in the screened areas and 6% per year (RR 0.94, CI 0.92 to 0.95) in the non-screened areas. For the older age groups (75-84 years), there was little change in breast cancer mortality over time in both screened and non-screened areas. 
Trends were less clear during the 10 year period before screening was introduced, with a possible increase in mortality in women aged less than 75 years in the non-screened regions. Conclusions We were unable to find an effect of the Danish screening programme on breast cancer mortality. The reductions in breast cancer mortality we observed in screening regions were similar to or smaller than those in non-screened areas and in age groups too young to benefit from screening, and are more likely explained by changes in risk factors and improved treatment than by screening mammography.
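The study's trend analysis used Poisson regression to estimate annual percentage change in mortality. A rough stand-in for that model is a log-linear least-squares fit of rates against calendar year, where exp(slope) is the relative risk per year. This sketch uses that approximation with hypothetical rates, not the Danish registry data:

```python
import math

def annual_percent_change(years, rates):
    """Least-squares slope of log(rate) on year; exp(slope) - 1 is the
    proportional change per year (a log-linear approximation of the
    Poisson trend model used in the paper)."""
    logs = [math.log(r) for r in rates]
    n = len(years)
    ybar = sum(years) / n
    lbar = sum(logs) / n
    slope = (sum((y - ybar) * (l - lbar) for y, l in zip(years, logs))
             / sum((y - ybar) ** 2 for y in years))
    return (math.exp(slope) - 1) * 100  # percent change per year

# Hypothetical mortality rates per 100,000, declining ~2% per year.
years = list(range(1997, 2007))
rates = [80 * 0.98 ** (y - 1997) for y in years]
print(round(annual_percent_change(years, rates), 1))  # about -2.0
```

A result of −2.0 here corresponds to a relative risk of 0.98 per year, the figure reported for the non-screening areas in the abstract.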

193 citations

Journal ArticleDOI
25 Jul 2007-JAMA
TL;DR: The high proportion of meta-analyses based on SMDs that show errors indicates that although the statistical process is ostensibly simple, data extraction is particularly liable to errors that can negate or even reverse the findings of the study.
Abstract: Context: Meta-analysis of trials that have used different continuous or rating scales to record outcomes of a similar nature requires sophisticated data handling and data transformation to a uniform scale, the standardized mean difference (SMD). It is not known how reliable such meta-analyses are. Objective: To study whether SMDs in meta-analyses are accurate. Data Sources: Systematic review of meta-analyses published in 2004 that reported a result as an SMD, with no language restrictions. Two trials were randomly selected from each meta-analysis. We attempted to replicate the results in each meta-analysis by independently calculating the SMD using Hedges' adjusted g. Data Extraction: Our primary outcome was the proportion of meta-analyses for which our result differed from that of the authors by 0.1 or more, either for the point estimate or for its confidence interval, for at least 1 of the 2 selected trials. We chose 0.1 as the cutoff because many commonly used treatments have an effect of 0.1 to 0.5 compared with placebo. Results: Of the 27 meta-analyses included in this study, we could not replicate the result for at least 1 of the 2 trials within 0.1 in 10 of the meta-analyses (37%), and in 4 cases the discrepancy was 0.6 or more for the point estimate. Common problems were erroneous numbers of patients, means, standard deviations, and signs for the effect estimate. In total, 17 meta-analyses (63%) had errors for at least 1 of the 2 trials examined. For the 10 meta-analyses with errors of at least 0.1, we checked the data from all the trials and conducted our own meta-analysis using the authors' methods. Seven of these 10 meta-analyses were erroneous (70%); 1 was subsequently retracted, and in 2 a significant difference disappeared or appeared. Conclusions: The high proportion of meta-analyses based on SMDs that show errors indicates that although the statistical process is ostensibly simple, data extraction is particularly liable to errors that can negate or even reverse the findings of a study. This has implications for researchers and implies that all readers, including journal reviewers and policy makers, should approach such meta-analyses with caution.
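Hedges' adjusted g, the statistic used for the replications above, is the standardised mean difference multiplied by a small-sample correction factor. A minimal sketch, using the common approximation J = 1 − 3/(4·df − 1) and made-up summary statistics:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference with Hedges' small-sample correction.
    Uses the common approximation J = 1 - 3 / (4*df - 1)."""
    df = n_t + n_c - 2
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (mean_t - mean_c) / pooled_sd
    j = 1 - 3 / (4 * df - 1)  # correction factor shrinks d toward zero
    return j * d

# Illustrative summary statistics, not from any reviewed trial.
g = hedges_g(mean_t=12.0, mean_c=10.0, sd_t=4.0, sd_c=4.0, n_t=20, n_c=20)
print(round(g, 2))  # about 0.49
```

For two groups of 20 with means 12 and 10 and a common SD of 4, the uncorrected d is 0.50 and the correction shrinks it to about 0.49; the paper's point is that errors in the extracted means, SDs, or sample sizes feed directly into this calculation.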

188 citations


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared with large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution.
Key messages:
Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials.
Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases.
Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews.
Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
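The asymmetry measure described above, the intercept from regressing standard normal deviates against precision (Egger's test), can be sketched with a plain least-squares fit. The effect sizes below are hypothetical log odds ratios chosen to mimic publication bias, where small trials show exaggerated benefits:

```python
def egger_intercept(effects, ses):
    """Egger's regression test for funnel plot asymmetry: regress each
    trial's standard normal deviate (effect/SE) on its precision (1/SE).
    The intercept measures asymmetry; zero is expected for a
    symmetrical funnel plot."""
    snd = [e / s for e, s in zip(effects, ses)]  # standard normal deviates
    prec = [1 / s for s in ses]                  # precisions
    n = len(snd)
    xbar = sum(prec) / n
    ybar = sum(snd) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(prec, snd))
             / sum((x - xbar) ** 2 for x in prec))
    return ybar - slope * xbar  # regression intercept

# Hypothetical log odds ratios: the small trials (large SE) show the
# biggest beneficial effects -- the classic publication-bias pattern.
effects = [-0.9, -0.7, -0.5, -0.3, -0.2]
ses     = [ 0.5,  0.4,  0.3,  0.2,  0.1]
print(round(egger_intercept(effects, ses), 2))  # about -1.61
```

An intercept near zero suggests a symmetrical funnel plot; here the strongly negative intercept flags the exaggerated effects among the small trials, the same signal the paper uses to predict discordance with large trials.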

37,989 citations

Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.

31,656 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations