
Showing papers by "Peter C Gøtzsche" published in 2007


Journal ArticleDOI
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative developed recommendations on what should be included in an accurate and complete report of an observational study, resulting in a checklist of 22 items (the STROBE statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles.
Abstract: Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. 18 items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the Web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.

15,454 citations


Journal ArticleDOI
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study, resulting in a checklist of 22 items that relate to the title, abstract, introduction, methods, results, and discussion sections of articles.
Abstract: Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study’s generalizability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control and cross-sectional studies. We convened a two-day workshop, in September 2004, with methodologists, researchers and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results and discussion sections of articles. Eighteen items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the web sites of PLoS Medicine, Annals of Internal Medicine and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.

13,974 citations


Journal ArticleDOI
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study, resulting in a checklist of 22 items that relate to the title, abstract, introduction, methods, results, and discussion sections of articles.

9,603 citations


Journal ArticleDOI
TL;DR: The STROBE Statement is a checklist of items that should be addressed in articles reporting on the 3 main study designs of analytical epidemiology: cohort, case-control, and cross-sectional studies; these recommendations are not prescriptions for designing or conducting studies.
Abstract: Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalizability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover 3 main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors, to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all 3 study designs and 4 are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available at http://www.annals.org and on the Web sites of PLoS Medicine and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.

9,000 citations


Journal ArticleDOI
18 Oct 2007-BMJ
TL;DR: In this article, an international group of methodologists, researchers, and journal editors sets out guidelines to improve reports of observational studies, whose poor reporting hampers assessment and makes them less useful.
Abstract: Poor reporting of research hampers assessment and makes it less useful. An international group of methodologists, researchers, and journal editors sets out guidelines to improve reports of observational studies

4,179 citations



Journal ArticleDOI
TL;DR: A checklist of items that should be addressed in reports of observational studies, the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement, is developed, providing general reporting recommendations for descriptive observational studies and for studies that investigate associations between exposures and health outcomes.
Abstract: Much medical research is observational. The reporting of observational studies is often of insufficient quality. Poor reporting hampers the assessment of the strengths and weaknesses of a study and the generalizability of its results. Taking into account empirical evidence and theoretical considerations, a group of methodologists, researchers, and editors developed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations to improve the quality of reporting of observational studies. The STROBE Statement consists of a checklist of 22 items, which relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to cohort studies, case-control studies, and cross-sectional studies, and 4 are specific to each of the 3 study designs. The STROBE Statement provides guidance to authors about how to improve the reporting of observational studies and facilitates critical appraisal and interpretation of studies by reviewers, journal editors, and readers. This explanatory and elaboration document is intended to enhance the use, understanding, and dissemination of the STROBE Statement. The meaning and rationale for each checklist item are presented. For each item, 1 or several published examples and, where possible, references to relevant empirical studies and methodological literature are provided. Examples of useful flow diagrams are also included. The STROBE Statement, this document, and the associated Web site (www.strobe-statement.org) should be helpful resources to improve reporting of observational research.

2,813 citations


Journal ArticleDOI
TL;DR: The STROBE Statement provides guidance to authors about how to improve the reporting of observational studies and facilitates critical appraisal and interpretation of studies by reviewers, journal editors and readers.
Abstract: Much medical research is observational. The reporting of observational studies is often of insufficient quality. Poor reporting hampers the assessment of the strengths and weaknesses of a study and the generalizability of its results. Taking into account empirical evidence and theoretical considerations, a group of methodologists, researchers, and editors developed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations to improve the quality of reporting of observational studies. The STROBE Statement consists of a checklist of 22 items, which relate to the title, abstract, introduction, methods, results and discussion sections of articles. Eighteen items are common to cohort studies, case-control studies and cross-sectional studies and four are specific to each of the three study designs. The STROBE Statement provides guidance to authors about how to improve the reporting of observational studies and facilitates critical appraisal and interpretation of studies by reviewers, journal editors and readers.This explanatory and elaboration document is intended to enhance the use, understanding, and dissemination of the STROBE Statement. The meaning and rationale for each checklist item are presented. For each item, one or several published examples and, where possible, references to relevant empirical studies and methodological literature are provided. Examples of useful flow diagrams are also included. The STROBE Statement, this document, and the associated web site (http://www.strobe-statement.org) should be helpful resources to improve reporting of observational research.

2,020 citations


Journal ArticleDOI
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study as mentioned in this paper.
Abstract: Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalizability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. 18 items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed "Explanation and Elaboration" document is published separately and is freely available on the web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.

826 citations


Journal ArticleDOI
TL;DR: Two-thirds of conclusions in favour of one of the interventions were no longer supported if only trials with adequate allocation concealment were included, and the loss of support mainly reflected loss of power and a shift in the point estimate towards a less beneficial effect.
Abstract: Background: Randomized trials without reported adequate allocation concealment have been shown to overestimate the benefit of experimental interventions. We investigated the robustness of conclusions drawn from meta-analyses to exclusion of such trials. Material: Random sample of 38 reviews from The Cochrane Library 2003, issue 2, and 32 other reviews from PubMed accessed in 2002. Eligible reviews presented a binary effect estimate from a meta-analysis of randomized controlled trials as the first statistically significant result that supported a conclusion in favour of one of the interventions. Methods: We assessed the methods sections of the trials in each included meta-analysis for adequacy of allocation concealment. We replicated each meta-analysis using the authors' methods but included only trials that had adequate allocation concealment. Conclusions were defined as not supported if our result was not statistically significant. Results: Thirty-four of the 70 meta-analyses contained a mixture of trials with unclear or inadequate concealment as well as trials with adequate allocation concealment. Four meta-analyses only contained trials with adequate concealment, and 32, only trials with unclear or inadequate concealment. When only trials with adequate concealment were included, 48 of 70 conclusions (69%; 95% confidence interval: 56-79%) lost support. The loss of support mainly reflected loss of power (the total number of patients was reduced by 49%) but also a shift in the point estimate towards a less beneficial effect. Conclusion: Two-thirds of conclusions in favour of one of the interventions were no longer supported if only trials with adequate allocation concealment were included.

436 citations
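As a minimal sketch of the arithmetic behind the headline figure (not the authors' code; the paper does not state which interval method was used, so an exact Clopper-Pearson interval is assumed here), the proportion of conclusions losing support and its 95% confidence interval can be reproduced roughly as follows:

    # Rough check of the reported result: 48 of 70 conclusions lost support.
    # A Clopper-Pearson (exact binomial) 95% interval is assumed; small deviations
    # from the published 56-79% may occur if another method was used.
    from scipy.stats import beta

    k, n = 48, 70
    point = k / n                          # about 0.69, i.e. 69%
    lower = beta.ppf(0.025, k, n - k + 1)  # exact lower limit
    upper = beta.ppf(0.975, k + 1, n - k)  # exact upper limit
    print(f"{point:.0%} (95% CI {lower:.0%} to {upper:.0%})")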


Journal Article
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement as mentioned in this paper is a checklist of items that should be addressed in articles reporting on the three main study designs of analytical epidemiology: cohort, case-control and cross-sectional studies.
Abstract: Introduction. Many questions in medical research are investigated in observational studies (1). Much of the research into the cause of diseases relies on cohort, case-control or cross-sectional studies. Observational studies also have a role in research into the benefits and harms of medical interventions (2). Randomized trials cannot answer all important questions about a given intervention. For example, observational studies are more suitable to detect rare or late adverse effects of treatments, and are more likely to provide an indication of what is achieved in daily medical practice (3). Research should be reported transparently so that readers can follow what was planned, what was done, what was found, and what conclusions were drawn. The credibility of research depends on a critical assessment by others of the strengths and weaknesses in study design, conduct and analysis. Transparent reporting is also needed to judge whether and how results can be included in systematic reviews (4,5). However, in published observational research important information is often missing or unclear. An analysis of epidemiological studies published in general medical and specialist journals found that the rationale behind the choice of potential confounding variables was often not reported (6). Only few reports of case-control studies in psychiatry explained the methods used to identify cases and controls (7). In a survey of longitudinal studies in stroke research, 17 of 49 articles (35%) did not specify the eligibility criteria (8). Others have argued that without sufficient clarity of reporting, the benefits of research might be achieved more slowly (9), and that there is a need for guidance in reporting observational studies (10,11). Recommendations on the reporting of research can improve reporting quality. The Consolidated Standards of Reporting Trials (CONSORT) Statement was developed in 1996 and revised five years later (12). Many medical journals supported this initiative (13), which has helped to improve the quality of reports of randomized trials (14,15). Similar initiatives have followed for other research areas, e.g. for the reporting of meta-analyses of randomized trials (16) or diagnostic studies (17). We established a network of methodologists, researchers and journal editors to develop recommendations for the reporting of observational research: the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement.
Aims and use of the STROBE Statement. The STROBE Statement is a checklist of items that should be addressed in articles reporting on the three main study designs of analytical epidemiology: cohort, case-control and cross-sectional studies. The intention is solely to provide guidance on how to report observational research well: these recommendations are not prescriptions for designing or conducting studies. Also, while clarity of reporting is a prerequisite to evaluation, the checklist is not an instrument to evaluate the quality of observational research. Here we present the STROBE Statement and explain how it was developed. In a detailed companion paper, the Explanation and Elaboration article (18-20), we justify the inclusion of the different checklist items, and give methodological background and published examples of what we consider transparent reporting. We strongly recommend using the STROBE checklist in conjunction with the explanatory article, which is available freely on the web sites of PLoS Medicine (www.plosmedicine.org), Annals of Internal Medicine (www.annals.org) and Epidemiology (www.epidem.com).
Development of the STROBE Statement. We established the STROBE Initiative in 2004, obtained funding for a workshop and set up a web site (www.strobe-statement.org). We searched textbooks, bibliographic databases, reference lists and personal files for relevant material, including previous recommendations, empirical studies of reporting and articles describing relevant methodological research …

Journal ArticleDOI
TL;DR: Ghost authorship in industry-initiated trials is very common and its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Abstract: Background: Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known. Methods and Findings: We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors. Conclusions: Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.

Journal ArticleDOI
25 Jul 2007-JAMA
TL;DR: The high proportion of meta-analyses based on SMDs that show errors indicates that although the statistical process is ostensibly simple, data extraction is particularly liable to errors that can negate or even reverse the findings of the study.
Abstract: Context: Meta-analysis of trials that have used different continuous or rating scales to record outcomes of a similar nature requires sophisticated data handling and data transformation to a uniform scale, the standardized mean difference (SMD). It is not known how reliable such meta-analyses are. Objective: To study whether SMDs in meta-analyses are accurate. Data Sources: Systematic review of meta-analyses published in 2004 that reported a result as an SMD, with no language restrictions. Two trials were randomly selected from each meta-analysis. We attempted to replicate the results in each meta-analysis by independently calculating the SMD using Hedges' adjusted g. Data Extraction: Our primary outcome was the proportion of meta-analyses for which our result differed from that of the authors by 0.1 or more, either for the point estimate or for its confidence interval, for at least 1 of the 2 selected trials. We chose 0.1 as the cut point because many commonly used treatments have an effect of 0.1 to 0.5, compared with placebo. Results: Of the 27 meta-analyses included in this study, we could not replicate the result for at least 1 of the 2 trials within 0.1 in 10 of the meta-analyses (37%), and in 4 cases, the discrepancy was 0.6 or more for the point estimate. Common problems were erroneous numbers of patients, means, standard deviations, and signs for the effect estimate. In total, 17 meta-analyses (63%) had errors for at least 1 of the 2 trials examined. For the 10 meta-analyses with errors of at least 0.1, we checked the data from all the trials and conducted our own meta-analysis, using the authors' methods. Seven of these 10 meta-analyses were erroneous (70%); 1 was subsequently retracted, and in 2 a significant difference disappeared or appeared. Conclusions: The high proportion of meta-analyses based on SMDs that show errors indicates that although the statistical process is ostensibly simple, data extraction is particularly liable to errors that can negate or even reverse the findings of the study. This has implications for researchers and implies that all readers, including journal reviewers and policy makers, should approach such meta-analyses with caution.
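As a rough illustration of the quantity being checked (a sketch only, not the code used in the study; the variance approximation shown is one common choice and differs slightly between meta-analysis packages), Hedges' adjusted g can be computed from two-group summary data roughly as follows:

    # Minimal sketch: Hedges' adjusted g from two-group summary statistics.
    # The example arm data below are hypothetical, not taken from any trial.
    import math

    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        df = n1 + n2 - 2
        sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)  # pooled SD
        d = (m1 - m2) / sp                   # Cohen's d
        j = 1 - 3 / (4 * df - 1)             # small-sample correction factor
        g = j * d
        var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # common approximation
        return g, math.sqrt(var_g)

    g, se = hedges_g(m1=12.1, sd1=4.0, n1=30, m2=14.0, sd2=4.5, n2=32)
    print(f"g = {g:.2f}, 95% CI {g - 1.96*se:.2f} to {g + 1.96*se:.2f}")
    # In the study above, a recalculated value differing from the published one
    # by 0.1 or more (point estimate or CI limit) counted as a discrepancy.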

Journal ArticleDOI
TL;DR: Two fundamental principles that physicians should remember when thinking about screening are highlighted: survival is always prolonged by early detection, even when deaths are not delayed and no lives are saved, and randomized trials are the only way to reliably determine whether screening does more good than harm.
Abstract: Last year, the New England Journal of Medicine ran a lead article reporting that patients with lung cancer had a 10-year survival approaching 90% if detected by screening spiral computed tomography. The publication garnered considerable media attention, and some felt that its findings provided a persuasive case for the immediate initiation of lung cancer screening. We strongly disagree. In this article, we highlight 4 reasons why the publication does not make a persuasive case for screening: the study had no control group, it lacked an unbiased outcome measure, it did not consider what is already known about this topic from previous studies, and it did not address the harms of screening. We conclude with 2 fundamental principles that physicians should remember when thinking about screening: (1) survival is always prolonged by early detection, even when deaths are not delayed and no lives are saved, and (2) randomized trials are the only way to reliably determine whether screening does more good than harm.
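The first principle follows from the fact that survival is measured from the date of diagnosis; the toy calculation below (hypothetical ages, not data from the article) shows how earlier detection lengthens measured survival even when the date of death is unchanged (lead-time bias):

    # Toy illustration of lead-time bias; all numbers are made up.
    age_at_death = 72.0          # the patient dies at the same age either way
    age_dx_symptoms = 70.0       # diagnosis when symptoms appear
    age_dx_screening = 67.0      # diagnosis three years earlier via screening

    survival_symptoms = age_at_death - age_dx_symptoms    # 2 years
    survival_screening = age_at_death - age_dx_screening  # 5 years

    # Measured "survival" more than doubles, yet death is not delayed.
    print(survival_symptoms, survival_screening)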

Journal ArticleDOI
TL;DR: Scientific articles tend to emphasize the major benefits of mammography screening over its major harms, and this imbalance is related to the authors' affiliation.
Abstract: The CONSORT statement specifies the need for a balanced presentation of both benefits and harms of medical interventions in trial reports. However, invitations to screening and newspaper articles often emphasize benefits and downplay or omit harms, and it is known that scientific articles can be influenced by conflicts of interest. We wanted to determine if a similar imbalance occurs in scientific articles on mammography screening and if it is related to author affiliation. We searched PubMed in April 2005 for articles on mammography screening that mentioned a benefit or a harm and that were published in 2004 in English. Data extraction was performed by three independent investigators, two unblinded and one blinded for article contents, and author names and affiliation, as appropriate. The extracted data were compared and discrepancies resolved by two investigators in a combined analysis. We defined three groups of authors: (1) authors in specialties unrelated to mammography screening, (2) authors in screening-affiliated specialties (radiology or breast cancer surgery) who were not working with screening, or authors funded by cancer charities, and (3) authors (at least one) working directly with mammography screening programmes. We used a data extraction sheet with 17 items described as important benefits and harms in the 2002 WHO/IARC report on breast cancer screening. We identified 854 articles, and 143 were eligible for the study. Most were original research. Benefits were mentioned more often than harms (96% vs 62%, P < 0.001). Fifty-five (38%) articles mentioned only benefits, whereas seven (5%) mentioned only harms (P < 0.001). Overdiagnosis was mentioned in 35 articles (24%), but was more often downplayed or rejected in articles that had authors working with screening (6/15; 40%) than in articles whose authors were affiliated by specialty or funding (1/6; 17%) or were unrelated to screening (1/14; 7%) (P = 0.03). Benefits in terms of reduced breast cancer mortality were mentioned in 109 (76%) articles and, where quantified, were more often given as a relative risk reduction than as an absolute risk reduction (45 articles (31%) versus 6 articles (3%); P < 0.001). Scientific articles tend to emphasize the major benefits of mammography screening over its major harms. This imbalance is related to the authors' affiliation.
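To see why relative risk reductions read as more impressive than the corresponding absolute risk reductions, consider the toy calculation below (hypothetical rates chosen for illustration, not figures from the articles reviewed):

    # Hypothetical 10-year breast cancer mortality with and without screening.
    risk_control = 0.005    # 0.5% die of breast cancer without screening
    risk_screened = 0.004   # 0.4% die of breast cancer with screening

    arr = risk_control - risk_screened   # absolute risk reduction: 0.1 percentage points
    rrr = arr / risk_control             # relative risk reduction: 20%
    nns = 1 / arr                        # number needed to screen: about 1000

    print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNS = {nns:.0f}")
    # "20% lower mortality" and "1 in 1000 benefit" describe the same effect.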

Journal ArticleDOI
TL;DR: This commentary examines a re-analysis of trials included in a systematic review of randomized clinical trials that compared placebo with no treatment, and concludes that B. E. Wampold et al.'s conclusion was not substantiated by their data and is best characterized as powerful spin.
Abstract: B. E. Wampold, T. Minami, S. C. Tierney, T. W. Baskin, and K. S. Bhati (2005) re-analyzed trials included in our systematic review of randomized clinical trials that compared placebo with no treatment (A. Hrobjartsson & P. C. Gotzsche, 2001). Based on 11 trials, B. E. Wampold et al. concluded that "... the placebo effect is robust" (p. 850). We (2001) concluded, based on 130 trials, that "we found little evidence in general that placebos have powerful clinical effects" (p. 1599). In this commentary, we examine the reasons for this discrepancy. For trials with continuous outcomes, our analysis (82 trials) and that of B. E. Wampold et al. (5 trials) resulted in pooled standardized mean differences that were small and essentially identical: -0.28 (95% confidence interval = -0.38 to -0.19) versus -0.29 (95% confidence interval = -0.52 to -0.06). There was considerable risk of bias (e.g., reporting bias, sample-size bias). Similarly, for trials with binary outcomes, our analysis (32 trials) and that of B. E. Wampold et al. (6 trials) found no statistically significant pooled effect of placebo interventions and were essentially identical: relative risk 0.95 (95% confidence interval = 0.88-1.02) versus odds ratio 0.99 (95% confidence interval = 0.81-1.23). Thus, B. E. Wampold et al.'s conclusion was not substantiated by their data, and is best characterized as powerful spin.
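The pooled figures quoted above come from standard meta-analytic pooling of per-trial effects; the sketch below shows a minimal fixed-effect, inverse-variance pooling of standardized mean differences (illustrative only, with made-up trial values; the reviews discussed used their own data and more elaborate models):

    # Minimal fixed-effect, inverse-variance pooling of per-trial SMDs.
    import math

    trials = [(-0.35, 0.12), (-0.10, 0.20), (-0.30, 0.15)]  # (SMD, standard error)

    weights = [1 / se**2 for _, se in trials]
    pooled = sum(w * smd for (smd, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled SMD = {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")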



Journal ArticleDOI
TL;DR: Hrobjartsson et al. as mentioned in this paper argue that an estimation of how robust the effect of placebo is should primarily rest on the magnitude and reproducibility of the effect.
Abstract: In an earlier comment (Hrobjartsson & Gotzsche, this issue) we pointed out that Wampold et al.'s conclusion “the placebo effect was robust” (2005) was not substantiated by their analysis, which came to essentially the same result as our original analysis (2001). Wampold et al. replied (this issue) that their conclusion was “… not made based on the magnitude of effect … but on a pattern of results interpreted in the context of the theory of placebo effects, the nature of the studies reviewed, other literature, and a pattern of results that corroborate predictions” (Wampold et al., this issue). In this follow-up commentary we argue that an estimation of how robust the effect of placebo is should primarily rest on the magnitude and reproducibility of the effect. We also comment on other aspects of Wampold et al.'s reply, for example that Wampold et al.'s critique of our review is not persuasive as their analysis came to essentially the same result, indicating that the difference in methodological and theoretical approaches had little importance. © 2007 Wiley Periodicals, Inc. J Clin Psychol 63: 405–408, 2007.

Journal ArticleDOI
TL;DR: Wai Leung et al. as discussed by the authors reported a case of small bowel enteropathy associated with chronic low-dose aspirin treatment, which was still present 3 months after aspirin withdrawal.