Posted Content

Scientific Utopia: II - Restructuring Incentives and Practices to Promote Truth Over Publishability

TL;DR: Strategies for improving scientific practices and knowledge accumulation are developed that account for ordinary human motivations and biases and can reduce the persistence of false findings.
Abstract: An academic scientist’s professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. Prior reports demonstrate how these incentives inflate the rate of false effects in published science. When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulation. Previous suggestions to address this problem are unlikely to be effective. For example, a journal of negative results publishes otherwise unpublishable reports. This enshrines the low status of the journal and its content. The persistence of false findings can be meliorated with strategies that make the fundamental but abstract accuracy motive – getting it right – competitive with the more tangible and concrete incentive – getting it published. We develop strategies for improving scientific practices and knowledge accumulation that account for ordinary human motivations and self-serving biases.
Citations
Journal ArticleDOI
TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Abstract: A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.

5,683 citations
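The paper's point that low power undermines even significant findings follows from the standard positive predictive value (PPV) formula. A minimal Python sketch is below; the prior odds of 0.25 (i.e., one in five probed effects is real) and the alpha level are illustrative assumptions, not values from the study.

```python
# Sketch: why low statistical power lowers the chance that a significant
# result reflects a true effect. Uses the standard formula
#   PPV = (power * R) / (power * R + alpha),
# where R is the pre-study odds that a probed effect is real.
def ppv(power: float, alpha: float = 0.05, prior_odds: float = 0.25) -> float:
    """Probability that a statistically significant finding is true."""
    return (power * prior_odds) / (power * prior_odds + alpha)

for power in (0.8, 0.5, 0.2):
    print(f"power={power:.1f}  PPV={ppv(power):.2f}")
# power=0.8  PPV=0.80
# power=0.5  PPV=0.71
# power=0.2  PPV=0.50
```

At 20% power, only half of the significant results reflect true effects under these assumptions.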

Journal ArticleDOI
TL;DR: This work argues for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation, and incentives, in the hope that this will facilitate action toward improving the transparency, reproducibility, and efficiency of scientific research.
Abstract: Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

1,951 citations

Journal ArticleDOI
13 Feb 2014-Nature
TL;DR: It turned out that the problem was not in the data or in Motyl's analyses; it lay in the surprisingly slippery nature of the P value, which is neither as reliable nor as objective as most scientists assume.
Abstract: P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume.

1,274 citations
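The slipperiness Nuzzo describes can be seen by simulating exact replications of one modestly powered study: identical designs routinely return P values orders of magnitude apart. A minimal sketch follows, with illustrative parameters (true effect d = 0.5, n = 30 per group) that are assumptions rather than figures from the article.

```python
# Simulate ten exact replications of the same two-group experiment and
# collect the P value each one yields.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n = 0.5, 30  # assumed true standardized effect and per-group sample size

pvals = []
for _ in range(10):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    pvals.append(stats.ttest_ind(treatment, control).pvalue)

print([f"{p:.3f}" for p in sorted(pvals)])
# Identical studies can span p < .01 to p > .5, purely by sampling error.
```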

Journal ArticleDOI
TL;DR: It is suggested that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses, and its effect seems to be weak relative to the real effect sizes being measured.
Abstract: A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.

852 citations
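The paper's p-curve logic can be sketched with a simple binomial test: genuine effects make the distribution of significant P values right-skewed, whereas p-hacking predicts an excess in the bin just below .05. In the spirit of that approach, a minimal Python example follows; the p-values and bin edges here are illustrative assumptions.

```python
# Compare the count of significant p-values just below .05 with the
# count in the adjacent lower bin; an overrepresented upper bin is the
# signature of p-hacking that this kind of test looks for.
from scipy.stats import binomtest

p_values = [0.003, 0.011, 0.018, 0.024, 0.031, 0.038,
            0.041, 0.043, 0.044, 0.046, 0.047, 0.048, 0.049]  # illustrative

upper = sum(0.045 < p < 0.050 for p in p_values)   # bin nearest .05
lower = sum(0.040 < p <= 0.045 for p in p_values)  # adjacent bin

# Under no p-hacking, the upper bin should not beat the lower one.
result = binomtest(upper, upper + lower, p=0.5, alternative="greater")
print(f"upper={upper}, lower={lower}, p={result.pvalue:.3f}")
```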

Journal ArticleDOI
TL;DR: Design calculations are recommended that estimate the probability of an estimate being in the wrong direction and the factor by which the magnitude of an effect might be overestimated, and the largest challenge in such a calculation is discussed: coming up with reasonable estimates of plausible effect sizes based on external information.
Abstract: Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.

824 citations
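Gelman and Carlin's design calculation is easy to approximate by simulation: assume a plausible true effect and the study's standard error, then ask what the significant estimates would look like. A minimal sketch follows; the values used (true effect 2, standard error 8) are illustrative assumptions.

```python
# Estimate power, Type S error (probability a significant estimate has
# the wrong sign), and Type M error (average factor by which significant
# estimates exaggerate the true effect) for a simple normal model.
import numpy as np

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=0):
    rng = np.random.default_rng(seed)
    estimates = rng.normal(true_effect, se, n_sims)
    crit = 1.96 * se                       # two-sided 5% cutoff
    sig = np.abs(estimates) > crit
    power = sig.mean()
    type_s = (estimates[sig] < 0).mean()                  # wrong sign
    type_m = np.abs(estimates[sig]).mean() / true_effect  # exaggeration
    return power, type_s, type_m

power, type_s, type_m = retrodesign(true_effect=2.0, se=8.0)
print(f"power={power:.2f}  Type S={type_s:.2f}  Type M≈{type_m:.1f}x")
# In a design this noisy, significant estimates are roughly 9x too large
# on average and carry the wrong sign about a quarter of the time.
```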

References
Book
01 Dec 1969
TL;DR: The concepts of power analysis are presented, with chapters covering standard procedures including the t-test for means, chi-square tests for goodness of fit and contingency tables, and the sign test.
Abstract: Contents: Prefaces. The Concepts of Power Analysis. The t-Test for Means. The Significance of a Product Moment rs (subscript s). Differences Between Correlation Coefficients. The Test That a Proportion is .50 and the Sign Test. Differences Between Proportions. Chi-Square Tests for Goodness of Fit and Contingency Tables. The Analysis of Variance and Covariance. Multiple Regression and Correlation Analysis. Set Correlation and Multivariate Methods. Some Issues in Power Analysis. Computational Procedures.

115,069 citations


"Scientific Utopia: II - Restructuri..." refers background in this paper

  • ...For example, the value of reporting effect sizes has been widely disseminated (Cohen, 1962, 1969, 1992; Wilkinson and Task Force on Statistical Inference, 1999)....

    [...]

Journal ArticleDOI
Jacob Cohen1
TL;DR: A convenient, although not comprehensive, presentation of required sample sizes is provided here; the sample sizes necessary for .80 power to detect small, medium, and large effects are tabled for eight standard statistical tests.
Abstract: One possible reason for the continued neglect of statistical power analysis in research in the behavioral sciences is the inaccessibility of or difficulty with the standard material. A convenient, although not comprehensive, presentation of required sample sizes is provided here. Effect-size indexes and conventional values for these are given for operationally defined small, medium, and large effects. The sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests: (a) the difference between independent means, (b) the significance of a product-moment correlation, (c) the difference between independent rs, (d) the sign test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contingency tables, (g) one-way analysis of variance, and (h) the significance of a multiple or multiple partial correlation.

38,291 citations


"Scientific Utopia: II - Restructuri..." refers background in this paper

  • ...For example, the value of reporting effect sizes has been widely disseminated (Cohen, 1962, 1969, 1992; Wilkinson and Task Force on Statistical Inference, 1999)....

    [...]
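Cohen's tabled sample sizes are straightforward to reproduce in code. Below is a sketch using statsmodels (an assumed dependency, not mentioned by the paper) for one entry of the table: the per-group n needed for .80 power to detect a "medium" effect (d = .5) with a two-sided independent-samples t-test.

```python
# Solve for the per-group sample size given effect size, alpha, and power.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n))  # 64 per group, matching Cohen's tabled value
```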

Book
01 Jan 1962
TL;DR: The Structure of Scientific Revolutions is a seminal work in the history and philosophy of science, and it remains a major source of inspiration for the present generation of scientists.
Abstract: A good book may have the power to change the way we see the world, but a great book actually becomes part of our daily consciousness, pervading our thinking to the point that we take it for granted, and we forget how provocative and challenging its ideas once were-and still are. "The Structure of Scientific Revolutions" is that kind of book. When it was first published in 1962, it was a landmark event in the history and philosophy of science. And fifty years later, it still has many lessons to teach. With "The Structure of Scientific Revolutions", Kuhn challenged long-standing linear notions of scientific progress, arguing that transformative ideas don't arise from the day-to-day, gradual process of experimentation and data accumulation, but that revolutions in science, those breakthrough moments that disrupt accepted thinking and offer unanticipated ideas, occur outside of "normal science," as he called it. Though Kuhn was writing when physics ruled the sciences, his ideas on how scientific revolutions bring order to the anomalies that amass over time in research experiments are still instructive in our biotech age. This new edition of Kuhn's essential work in the history of science includes an insightful introductory essay by Ian Hacking that clarifies terms popularized by Kuhn, including paradigm and incommensurability, and applies Kuhn's ideas to the science of today. Usefully keyed to the separate sections of the book, Hacking's essay provides important background information as well as a contemporary context. Newly designed, with an expanded index, this edition will be eagerly welcomed by the next generation of readers seeking to understand the history of our perspectives on science.

36,808 citations


"Scientific Utopia: II - Restructuri..." refers methods in this paper

  • ...This democratizing function for acquiring knowledge made replication a central principle of the scientific method from before Bacon to the present (e.g., al Haytham, 1021, as translated by Sabra, 1989; Jasny, Chin, Chong, & Vignieri, 2011; Kuhn, 1962; Lakatos, 1978; Popper, 1934; Rosenthal, 1991; Schmidt, 2009)....

    [...]

  • ...the scientific method from before Bacon to the present (e.g., al Haytham, 1021, as translated by Sabra, 1989; Jasny, Chin, Chong, & Vignieri, 2011; Kuhn, 1962; Lakatos, 1978; Popper, 1934; Rosenthal, 1991; Schmidt, 2009). Replication is so central to science that it may serve as a "demarcation…

    [...]

Book
01 Jan 1934
TL;DR: This book, which introduced the now legendary doctrine of 'falsificationism', revolutionized contemporary thinking on science and knowledge; it ranks alongside The Open Society and Its Enemies as one of Popper's most enduring works and contains insights and arguments that demand to be read to this day.
Abstract: Described by the philosopher A.J. Ayer as a work of 'great originality and power', this book revolutionized contemporary thinking on science and knowledge. Ideas such as the now legendary doctrine of 'falsificationism' electrified the scientific community, influencing even working scientists, as well as post-war philosophy. This astonishing work ranks alongside The Open Society and Its Enemies as one of Popper's most enduring books and contains insights and arguments that demand to be read to this day.

7,904 citations

Trending Questions (2)
What are the potential benefits of promotional incentives to scientific publications?

The paper does not mention any potential benefits of promotional incentives to scientific publications.

What are the effects of promotional incentives on scientific publications?

Promotional incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results, leading to an inflation of the rate of false effects in published science.