Author

G. D. Williamson

Bio: G. D. Williamson is an academic researcher from the Centers for Disease Control and Prevention. The author has contributed to research on topics including observational studies and checklists, has an h-index of 2, and has co-authored 2 publications receiving 15,247 citations.

Papers
Journal ArticleDOI
19 Apr 2000-JAMA
TL;DR: The proposed checklist contains specifications for reporting meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion; its use should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers.
Abstract: Objective: Because of the pressure for timely, informed decisions in public health and clinical practice and the explosion of information in the scientific literature, research results must be synthesized. Meta-analyses are increasingly used to address this problem, and they often evaluate observational studies. A workshop was held in Atlanta, Ga, in April 1997, to examine the reporting of meta-analyses of observational studies and to make recommendations to aid authors, reviewers, editors, and readers. Participants: Twenty-seven participants were selected by a steering committee, based on expertise in clinical practice, trials, statistics, epidemiology, social sciences, and biomedical editing. Deliberations of the workshop were open to other interested scientists. Funding for this activity was provided by the Centers for Disease Control and Prevention. Evidence: We conducted a systematic review of the published literature on the conduct and reporting of meta-analyses in observational studies using MEDLINE, Educational Research Information Center (ERIC), PsycLIT, and the Current Index to Statistics. We also examined reference lists of the 32 studies retrieved and contacted experts in the field. Participants were assigned to small-group discussions on the subjects of bias, searching and abstracting, heterogeneity, study categorization, and statistical methods. Consensus Process: From the material presented at the workshop, the authors developed a checklist summarizing recommendations for reporting meta-analyses of observational studies. The checklist and supporting evidence were circulated to all conference attendees and additional experts. All suggestions for revisions were addressed. Conclusions: The proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. Use of the checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers. An evaluation plan is suggested and research areas are explored.

17,663 citations

Journal ArticleDOI
TL;DR: To determine the relative merits of two quantitative methods for estimating the summary effects of observational studies, the authors compared two approaches to meta-analysis, each quantifying the relation between oral contraceptive use and the risk for ovarian cancer.
Abstract: To determine the relative merits of two quantitative methods used to estimate the summary effects of observational studies, the authors compared two methods of meta-analysis. Each quantified the relation between oral contraceptive use and the risk for ovarian cancer. One analysis consisted of a meta-analysis using summary data from 11 published studies from the literature (MAL) in which the study was the unit of analysis, and the second consisted of a meta-analysis using individual patient data (MAP) in which the patient was the unit of analysis. The authors found excellent quantitative agreement between the summary effect estimates from the MAL and the MAP. The MAP permits analysis 1) among outcomes, exposures, and confounders not investigated in the original studies, 2) when the original effect measures differ among studies and cannot be converted to a common measure (e.g., slopes vs. correlation coefficients), and 3) when there is a paucity of studies. The MAL permits analysis 1) when resources are limited, 2) when time is limited, and 3) when original study data are not available or are available only from a biased sample of studies. In public health epidemiology, data from original studies are often accessible only to limited numbers of research groups and for only a few types of studies that have high public health priority. Consequently, few opportunities for pooled analysis exist. However, from a policy view, MAL will provide answers to many questions and will help in identifying questions for future investigation.

146 citations
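
To make the contrast between the two approaches concrete, here is a minimal, hypothetical Python sketch: the MAL-style analysis pools published study-level odds ratios by inverse-variance weighting, while the MAP-style analysis is approximated by combining per-study 2x2 tables (as they might be rebuilt from individual patient records) with a Mantel-Haenszel estimator stratified by study. All numbers are invented for illustration and are not data from the paper.

import math

# --- Meta-analysis of the literature (MAL): the study is the unit of analysis ---
# Hypothetical study-level summaries: odds ratio and 95% CI for OC use vs. ovarian cancer.
studies = [
    {"or": 0.70, "ci": (0.55, 0.89)},
    {"or": 0.62, "ci": (0.45, 0.85)},
    {"or": 0.81, "ci": (0.60, 1.09)},
]

def pooled_or_fixed_effect(studies):
    """Inverse-variance fixed-effect pooling of log odds ratios."""
    num = den = 0.0
    for s in studies:
        log_or = math.log(s["or"])
        lo, hi = s["ci"]
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        w = 1.0 / se ** 2
        num += w * log_or
        den += w
    return math.exp(num / den)

# --- Meta-analysis of individual patient data (MAP): the patient is the unit ---
# Hypothetical per-study 2x2 counts (a, b, c, d) = exposed cases, exposed controls,
# unexposed cases, unexposed controls, combined with the Mantel-Haenszel estimator.
tables = [
    (40, 160, 60, 140),
    (25, 120, 45, 110),
    (30, 100, 35,  95),
]

def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across study strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

print(f"MAL pooled OR (study-level summaries): {pooled_or_fixed_effect(studies):.2f}")
print(f"MAP-style OR (stratified patient data): {mantel_haenszel_or(tables):.2f}")

Agreement between the two outputs mirrors the paper's finding of excellent quantitative agreement between the MAL and MAP summary estimates, although the real MAP analysis works directly with patient records rather than reconstructed tables.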


Cited by
Journal ArticleDOI
TL;DR: The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations.
Abstract: Background. Qualitative research explores complex phenomena encountered by clinicians, health care providers, policy makers and consumers. Although partial checklists are available, no consolidated reporting framework exists for any type of qualitative design. Objective. To develop a checklist for explicit and comprehensive reporting of qualitative studies (in-depth interviews and focus groups). Methods. We performed a comprehensive search in Cochrane and Campbell Protocols, Medline, CINAHL, systematic reviews of qualitative studies, author or reviewer guidelines of major medical journals and reference lists of relevant publications for existing checklists used to assess qualitative studies. Seventy-six items from 22 checklists were compiled into a comprehensive list. All items were grouped into three domains: (i) research team and reflexivity, (ii) study design, and (iii) data analysis and reporting. Duplicate items and those that were ambiguous, too broadly defined and impractical to assess were removed. Results. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design, and (iii) data analysis and reporting. Conclusions. The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations.

18,169 citations

Journal ArticleDOI
19 Apr 2000-JAMA
TL;DR: The proposed checklist contains specifications for reporting meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion; its use should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers.
Abstract: Objective: Because of the pressure for timely, informed decisions in public health and clinical practice and the explosion of information in the scientific literature, research results must be synthesized. Meta-analyses are increasingly used to address this problem, and they often evaluate observational studies. A workshop was held in Atlanta, Ga, in April 1997, to examine the reporting of meta-analyses of observational studies and to make recommendations to aid authors, reviewers, editors, and readers. Participants: Twenty-seven participants were selected by a steering committee, based on expertise in clinical practice, trials, statistics, epidemiology, social sciences, and biomedical editing. Deliberations of the workshop were open to other interested scientists. Funding for this activity was provided by the Centers for Disease Control and Prevention. Evidence: We conducted a systematic review of the published literature on the conduct and reporting of meta-analyses in observational studies using MEDLINE, Educational Research Information Center (ERIC), PsycLIT, and the Current Index to Statistics. We also examined reference lists of the 32 studies retrieved and contacted experts in the field. Participants were assigned to small-group discussions on the subjects of bias, searching and abstracting, heterogeneity, study categorization, and statistical methods. Consensus Process: From the material presented at the workshop, the authors developed a checklist summarizing recommendations for reporting meta-analyses of observational studies. The checklist and supporting evidence were circulated to all conference attendees and additional experts. All suggestions for revisions were addressed. Conclusions: The proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. Use of the checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers. An evaluation plan is suggested and research areas are explored.

17,663 citations

Journal ArticleDOI
TL;DR: The quality assessment of non-randomized studies is an important component of a thorough meta-analysis of such studies; the choice of quality assessment tool can dramatically influence the interpretation of a meta-analysis and can even reverse conclusions regarding the effectiveness of an intervention.
Abstract: The quality assessment of non-randomized studies is an important component of a thorough meta-analysis of such studies. Low-quality studies can lead to a distortion of the summary effect estimate. Recent guidelines for the reporting of meta-analyses of observational studies recommend the assessment of study quality (MOOSE) [1]. In principle, three categories of quality assessment tools are available: scales, simple checklists, or checklists with a summary judgment (for details see Sanderson et al. 2007 [2]). The results of the quality assessment can be used in several ways, such as forming inclusion criteria for the meta-analysis, informing a sensitivity analysis or meta-regression, weighting studies, or highlighting areas of methodological quality poorly addressed by the included studies [3]. It has been criticized that the use of summary scores involves inherent weighting of component items, including items that may not be related to the validity of the study findings [2]. Sanderson et al. [2] recently identified 86 tools overall for assessing the quality of non-randomized studies. Their review "highlighted the lack of a single obvious candidate tool for assessing quality of observational epidemiological studies" [2]. In the field of randomized trials, it has been shown that the choice of quality scale can dramatically influence the interpretation of meta-analyses, and can even reverse conclusions regarding the effectiveness of an intervention [4]. Wells et al. [5] proposed a scale for assessing the quality of published non-randomized studies in meta-analyses,

10,420 citations
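
As a hedged illustration of one way the quality scores described above can feed a sensitivity analysis, the following Python sketch pools a fixed-effect summary relative risk over all studies and again over only the studies scoring highly on a Newcastle-Ottawa-style scale. The study effects, standard errors, score threshold, and scores are hypothetical and are not taken from any of the cited works.

import math

# Hypothetical studies: log relative risk, its standard error, and a
# Newcastle-Ottawa-style quality score (0-9); all values invented for illustration.
studies = [
    {"log_rr": -0.35, "se": 0.12, "quality": 8},
    {"log_rr": -0.10, "se": 0.20, "quality": 4},
    {"log_rr": -0.40, "se": 0.15, "quality": 7},
    {"log_rr":  0.05, "se": 0.25, "quality": 3},
]

def pooled_rr(subset):
    """Fixed-effect inverse-variance pooled relative risk."""
    weights = [1 / s["se"] ** 2 for s in subset]
    pooled_log = sum(w * s["log_rr"] for w, s in zip(weights, subset)) / sum(weights)
    return math.exp(pooled_log)

all_studies = pooled_rr(studies)
high_quality = pooled_rr([s for s in studies if s["quality"] >= 7])

print(f"Pooled RR, all studies:       {all_studies:.2f}")
print(f"Pooled RR, high quality only: {high_quality:.2f}")
# A large gap between the two estimates signals that low-quality studies
# may be distorting the summary effect, which is the concern the editorial raises.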

Journal ArticleDOI
TL;DR: A more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science is presented, and forthright advice on controversial or novel issues is offered.
Abstract: Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.

6,467 citations
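
Two of the recommendations above, reporting SD rather than SEM and using precision of estimation instead of null-hypothesis testing, can be illustrated with a small Python sketch. The group names, sample sizes, and sprint-time data are invented for the example and are not drawn from the article.

import math
import random

random.seed(1)

# Hypothetical sprint-time samples (seconds) for a control and a treatment group.
control   = [random.gauss(12.0, 0.6) for _ in range(20)]
treatment = [random.gauss(11.7, 0.6) for _ in range(20)]

def mean_sd(xs):
    """Sample mean and standard deviation."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return m, sd

m_c, sd_c = mean_sd(control)
m_t, sd_t = mean_sd(treatment)

# SD describes between-athlete variability; SEM (= SD / sqrt(n)) only describes
# how precisely the mean is estimated, which is why the guidelines prefer SD in reports.
sem_c = sd_c / math.sqrt(len(control))

# Precision of estimation: a 95% CI for the difference in means (normal approximation).
diff = m_t - m_c
se_diff = math.sqrt(sd_c ** 2 / len(control) + sd_t ** 2 / len(treatment))
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)

print(f"Control:   mean {m_c:.2f} s, SD {sd_c:.2f} s (SEM {sem_c:.2f} s)")
print(f"Treatment: mean {m_t:.2f} s, SD {sd_t:.2f} s")
print(f"Difference {diff:.2f} s, 95% CI {ci[0]:.2f} to {ci[1]:.2f} s")

Reporting the interval rather than only a p-value lets a reader judge whether the plausible range of the effect is large enough to matter in practice, which is the sense in which the authors prefer precision of estimation for clinical or practical decisions.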