Author
Ingram Olkin
Other affiliations: University of British Columbia, Michigan State University, American University, …
Bio: Ingram Olkin is an academic researcher from Stanford University. The author has contributed to research in topics including multivariate statistics and the multivariate normal distribution. The author has an h-index of 79, has co-authored 288 publications, and has received 74,131 citations. Previous affiliations of Ingram Olkin include the University of British Columbia and Michigan State University.
Papers published on a yearly basis
Papers
TL;DR: A proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion; its use should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers.
Abstract: Objective: Because of the pressure for timely, informed decisions in public health and clinical practice and the explosion of information in the scientific literature, research results must be synthesized. Meta-analyses are increasingly used to address this problem, and they often evaluate observational studies. A workshop was held in Atlanta, Ga, in April 1997, to examine the reporting of meta-analyses of observational studies and to make recommendations to aid authors, reviewers, editors, and readers.
Participants: Twenty-seven participants were selected by a steering committee, based on expertise in clinical practice, trials, statistics, epidemiology, social sciences, and biomedical editing. Deliberations of the workshop were open to other interested scientists. Funding for this activity was provided by the Centers for Disease Control and Prevention.
Evidence: We conducted a systematic review of the published literature on the conduct and reporting of meta-analyses in observational studies using MEDLINE, Educational Research Information Center (ERIC), PsycLIT, and the Current Index to Statistics. We also examined reference lists of the 32 studies retrieved and contacted experts in the field. Participants were assigned to small-group discussions on the subjects of bias, searching and abstracting, heterogeneity, study categorization, and statistical methods.
Consensus Process: From the material presented at the workshop, the authors developed a checklist summarizing recommendations for reporting meta-analyses of observational studies. The checklist and supporting evidence were circulated to all conference attendees and additional experts. All suggestions for revisions were addressed.
Conclusions: The proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. Use of the checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers. An evaluation plan is suggested and research areas are explored.
17,663 citations
01 Jan 1985
TL;DR: In this book, the authors present parametric and nonparametric methods for estimating effect sizes from a series of experiments, fit fixed effect and general linear models to those effect sizes, and combine the resulting estimates of effect magnitude.
Abstract: Preface. Introduction. Data Sets. Tests of Statistical Significance of Combined Results. Vote-Counting Methods. Estimation of a Single Effect Size: Parametric and Nonparametric Methods. Parametric Estimation of Effect Size from a Series of Experiments. Fitting Parametric Fixed Effect Models to Effect Sizes: Categorical Methods. Fitting Parametric Fixed Effect Models to Effect Sizes: General Linear Models. Random Effects Models for Effect Sizes. Multivariate Models for Effect Sizes. Combining Estimates of Correlation Coefficients. Diagnostic Procedures for Research Synthesis Models. Clustering Estimates of Effect Magnitude. Estimation of Effect Size When Not All Study Outcomes Are Observed. Meta-Analysis in the Physical and Biological Sciences. Appendix. References. Index.
9,769 citations
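As a small illustration of the fixed effect models for effect sizes covered in the chapters listed above, here is a brief sketch, not taken from the book, of standard inverse-variance pooling of study effect sizes; the function name and example numbers are invented.

```python
# Illustrative sketch (not from the book): inverse-variance fixed-effect pooling
# of per-study effect sizes, the basic combination underlying fixed effect models.
import math

def fixed_effect_pool(effects, variances):
    """Combine study effect sizes with inverse-variance weights.

    effects   -- per-study effect estimates (e.g. standardized mean differences)
    variances -- their sampling variances
    Returns the pooled estimate and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Example with made-up numbers: three studies.
pooled, se = fixed_effect_pool([0.30, 0.45, 0.10], [0.04, 0.09, 0.02])
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}")
```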
06 Apr 2011
TL;DR: In this book, doubly stochastic matrices and Schur-convex functions are developed as the central tools of majorization theory, with applications ranging from combinatorial analysis and geometric inequalities to matrix theory, numerical analysis, probability, and statistics, together with matrix factorizations, compounds, direct products, M-matrices, and extremal representations of matrix functions.
Abstract: Introduction.- Doubly Stochastic Matrices.- Schur-Convex Functions.- Equivalent Conditions for Majorization.- Preservation and Generation of Majorization.- Rearrangements and Majorization.- Combinatorial Analysis.- Geometric Inequalities.- Matrix Theory.- Numerical Analysis.- Stochastic Majorizations.- Probabilistic, Statistical, and Other Applications.- Additional Statistical Applications.- Orderings Extending Majorization.- Multivariate Majorization.- Convex Functions and Some Classical Inequalities.- Stochastic Ordering.- Total Positivity.- Matrix Factorizations, Compounds, Direct Products, and M-Matrices.- Extremal Representations of Matrix Functions.
6,641 citations
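As a small illustration of the majorization ordering that underlies doubly stochastic matrices and Schur-convex functions, here is a sketch, not taken from the book, of the partial-sum test for whether one vector majorizes another; the function name and example vectors are invented.

```python
# Illustrative sketch (not from the book): x majorizes y when, after sorting in
# decreasing order, every partial sum of x is at least the corresponding partial
# sum of y and the totals agree; Schur-convex functions preserve this ordering.
def majorizes(x, y, tol=1e-12):
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px < py - tol:           # a partial sum of x falls below that of y
            return False
    return abs(px - py) <= tol      # totals must be equal

# Example: (3, 1, 0) majorizes (2, 1, 1); both majorize the uniform vector.
print(majorizes([3, 1, 0], [2, 1, 1]))        # True
print(majorizes([2, 1, 1], [4/3, 4/3, 4/3]))  # True
```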
TL;DR: The authors hope this report will generate further thought about ways to improve the quality of reports of meta-analyses of RCTs, and that interested readers, reviewers, researchers, and editors will use the QUOROM statement and generate ideas for its improvement.
4,767 citations
Cited by
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses
62,157 citations
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of meta-analyses of randomized controlled trials; this article summarizes its revision, PRISMA.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews.
Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7
In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1).
Box 1: Conceptual issues in the evolution from QUOROM to PRISMA
46,935 citations
TL;DR: A new quantity, I², is developed, which the authors believe gives a better measure of the consistency between trials in a meta-analysis than the standard test of heterogeneity, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice?
Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis.
Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4
Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted?
A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
45,105 citations
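The abstract above breaks off before stating the usual test statistic. For orientation only, here is a brief sketch, not taken from the paper, of Cochran's Q and the derived quantity I² = 100% × (Q − df)/Q, truncated at zero; the function name and example numbers are invented.

```python
# Sketch of the usual heterogeneity test statistic (Cochran's Q) and the
# I² quantity derived from it: I² = 100% * (Q - df) / Q, truncated at zero.
def cochran_q_and_i2(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Example with made-up numbers: three studies.
q, i2 = cochran_q_and_i2([0.30, 0.45, 0.10], [0.04, 0.09, 0.02])
print(f"Q = {q:.2f} on 2 df, I² = {i2:.1f}%")
```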
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared with large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution. Key messages: Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials. Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials. Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews. Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
37,989 citations
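As an illustration of the outcome measure described above (the intercept from regressing standard normal deviates against precision), here is a brief sketch, not taken from the paper; the function name and example numbers are invented.

```python
# Illustrative sketch of the asymmetry measure described in the abstract: regress
# each trial's standard normal deviate (effect / SE) on its precision (1 / SE);
# the fitted intercept indicates funnel plot asymmetry.
import numpy as np

def funnel_asymmetry_intercept(effects, standard_errors):
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    z = effects / se                        # standard normal deviates
    precision = 1.0 / se
    X = np.column_stack([np.ones_like(precision), precision])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept

# Example with made-up numbers: four trials.
print(funnel_asymmetry_intercept([0.30, 0.45, 0.10, 0.60], [0.20, 0.30, 0.14, 0.35]))
```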
TL;DR: This paper examines eight published reviews, each reporting results from several related trials, in order to evaluate the efficacy of a certain treatment for a specified medical condition, and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
33,234 citations
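The simple noniterative procedure referred to in the TL;DR is commonly implemented as a method-of-moments estimate of the between-study variance followed by random-effects pooling. The sketch below follows that standard recipe rather than the paper's own text; the function name and example numbers are invented.

```python
# Illustrative sketch of a noniterative (method-of-moments) random-effects
# procedure: estimate the between-study variance tau² from Cochran's Q, then
# pool the study effects with weights 1 / (variance + tau²).
def random_effects_pool(effects, variances):
    w = [1.0 / v for v in variances]
    pooled_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom) if denom > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled_re = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se_re = (1.0 / sum(w_star)) ** 0.5
    return tau2, pooled_re, se_re

# Example with made-up numbers: three studies.
tau2, pooled, se = random_effects_pool([0.30, 0.45, 0.10], [0.04, 0.09, 0.02])
print(f"tau² = {tau2:.3f}, pooled effect = {pooled:.3f}, SE = {se:.3f}")
```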