
Robust misinterpretation of confidence intervals

TLDR
Although all six statements were false, both researchers and students endorsed, on average, more than three of them, indicating a gross misunderstanding of CIs and suggesting that many researchers do not know the correct interpretation of a CI.
Abstract
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students—all in the field of psychology—were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers’ performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
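
As an illustrative aside (not part of the paper): the correct frequentist interpretation of a 95% CI is a long-run property of the procedure, namely that roughly 95% of intervals constructed this way across repeated samples contain the true parameter; it is not a 95% probability statement about any single computed interval. The short Python sketch below, with arbitrary values for the true mean, standard deviation, sample size, and number of replications, demonstrates this coverage property by simulation.

import numpy as np

# Illustrative sketch: a 95% CI's "95%" describes the procedure's long-run
# coverage over repeated samples, not the probability that one computed
# interval contains the true mean. All numeric values here are arbitrary.
rng = np.random.default_rng(0)
true_mean, sigma, n, n_reps = 10.0, 2.0, 30, 10_000

z = 1.96  # approximate 97.5th percentile of the standard normal
covered = 0
for _ in range(n_reps):
    sample = rng.normal(true_mean, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - z * se, sample.mean() + z * se
    covered += (lo <= true_mean <= hi)

print(f"Empirical coverage: {covered / n_reps:.3f}")  # close to 0.95

With the normal critical value 1.96 and n = 30, the empirical coverage comes out slightly below 0.95; using the t distribution's critical value instead would bring it closer to the nominal level.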


Citations
Journal ArticleDOI

The fallacy of placing confidence in confidence intervals

TL;DR: It is shown in a number of examples that CIs do not necessarily have the properties commonly ascribed to them and can lead to unjustified or arbitrary inferences; the authors suggest that other theories of interval estimation should be used instead.
Journal ArticleDOI

Ordinal Regression Models in Psychology: A Tutorial

TL;DR: Ordinal variables, although extremely common in psychology, are almost exclusively analyzed with statistical models that falsely assume them to be metric, which can lead to distorted effect estimates.
Journal ArticleDOI

Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data

TL;DR: Bayesian inference methods are used to estimate the posterior distribution of the parameters of the psychometric function, and the beta-binomial model is shown to yield accurate credible intervals even for data that exhibit substantial overdispersion.
Journal ArticleDOI

The philosophy of Bayes’ factors and the quantification of statistical evidence

TL;DR: In this article, the authors explore the concept of statistical evidence and how it can be quantified using the Bayes factor, and discuss the philosophical issues inherent in its use.
References
Journal ArticleDOI

Researchers misunderstand confidence intervals and standard error bars.

TL;DR: Results suggest that many leading researchers have severe misconceptions about how error bars relate to statistical significance, do not adequately distinguish CIs and SE bars, and do not appreciate the importance of whether the 2 means are independent or come from a repeated measures design.
Book ChapterDOI

Confidence Intervals vs Bayesian Intervals

TL;DR: For many years, statistics textbooks have followed a canonical procedure: (1) the reader is warned not to use the discredited methods of Bayes and Laplace, (2) an orthodox method is extolled as superior and applied to a few simple problems, and (3) the corresponding Bayesian solutions are not worked out or described in any way.
Journal ArticleDOI

Significance tests die hard: The amazing persistence of a probabilistic misconception.

TL;DR: In this article, the authors present a critique of the flawed logical structure of statistical significance tests, analyze why the use of significance tests persists despite this faulty reasoning, and identify the illusion of probabilistic proof by contradiction as a central stumbling block.

Misinterpretations of significance: A problem students share with their teachers?

TL;DR: This paper proposes a pedagogical approach to teaching significance tests that explains the meaning of statistical significance appropriately. Six common misinterpretations were presented to psychologists working at German universities and were found to be surprisingly widespread, even among instructors who teach statistics to psychology students.
Journal ArticleDOI

On the Logic and Purpose of Significance Testing

TL;DR: In this article, the authors review the functions that data analysis is supposed to serve in the social sciences, examine the ways in which these functions are performed by NHST, and evaluate interval-based estimation as an alternative to NHST.