Open Access · Journal Article · DOI

Robust misinterpretation of confidence intervals

TLDR
Both researchers and students endorsed, on average, more than three of six false statements about confidence intervals, indicating a gross misunderstanding of CIs. Researchers hardly outperformed students, even though the students had not received any education on statistical inference, suggesting that many researchers do not know the correct interpretation of a CI.
Abstract
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students—all in the field of psychology—were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers’ performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
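The correct, frequentist reading of a CI that the abstract alludes to concerns long-run coverage: over many repeated samples, roughly 95% of the computed intervals contain the fixed true parameter, and no probability statement attaches to any single realized interval. A minimal simulation sketch (the population mean, SD, and sample size below are illustrative values, not from the paper):

```python
import random
import statistics

def ci_covers_mean(true_mean, true_sd, n, z=1.96):
    """Draw one sample and check whether its normal-approximation
    95% CI for the mean contains the true mean."""
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return (m - z * se) <= true_mean <= (m + z * se)

random.seed(1)
# Over many repeated samples, roughly 95% of the intervals should
# contain the fixed true mean -- this long-run coverage is the only
# probability statement a confidence interval licenses.
reps = 10_000
hits = sum(ci_covers_mean(100.0, 15.0, n=50) for _ in range(reps))
print(hits / reps)  # typically close to 0.95
```

Note that the coverage statement is about the procedure, not any particular interval: once an interval is computed, the true mean is either in it or not.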



Citations

The fallacy of placing confidence in confidence intervals

TL;DR: A number of examples show that confidence intervals do not necessarily have the properties commonly attributed to them and can lead to unjustified or arbitrary inferences; it is suggested that other theories of interval estimation be used instead.

Ordinal Regression Models in Psychology: A Tutorial

TL;DR: Ordinal variables, although extremely common in psychology, are almost exclusively analyzed with statistical models that falsely assume them to be metric, which can lead to distorted effect-size estimates.

Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data

TL;DR: Bayesian inference methods are used to estimate the posterior distribution of the parameters of the psychometric function; using a beta-binomial model makes it possible to determine accurate credible intervals even for data that exhibit substantial overdispersion.
References

The earth is round (p < .05)

TL;DR: The authors review the problems with null hypothesis significance testing, including the near-universal misinterpretation of p as the probability that H₀ is false, the misinterpretation of its complement as the probability of successful replication, and the mistaken assumption that rejecting H₀ thereby affirms the theory that led to the test.

Statistical Methods in Psychology Journals: Guidelines and Explanations

TL;DR: The Task Force on Statistical Inference (TFSI) of the American Psychological Association (APA) was formed to discuss the application of significance testing in psychology journals and its alternatives, including alternative underlying models and data transformations.

A practical solution to the pervasive problems of p values.

TL;DR: The BIC provides an approximation to a Bayesian hypothesis test, does not require the specification of priors, and can be easily calculated from SPSS output.
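The conversion this TL;DR describes can be sketched briefly: the Bayes factor for H₀ over H₁ is approximated as BF₀₁ ≈ exp((BIC_H₁ − BIC_H₀)/2), which, assuming equal prior odds, yields a posterior probability for H₀. The function name and the example BIC values below are illustrative, not taken from the paper:

```python
import math

def bic_to_posterior_prob_h0(bic_h0, bic_h1):
    """Approximate P(H0 | data) from two BIC values.

    Uses the approximation BF01 = exp((BIC_H1 - BIC_H0) / 2)
    and assumes equal prior odds for the two hypotheses.
    """
    bf01 = math.exp((bic_h1 - bic_h0) / 2.0)
    return bf01 / (1.0 + bf01)

# A BIC difference of 4 in favour of H0 (lower BIC is better)
# translates into a posterior probability of roughly 0.88 for H0.
print(bic_to_posterior_prob_h0(bic_h0=100.0, bic_h1=104.0))
```

Only the BIC difference matters here, which is why the quantities can be read off standard regression output without specifying priors.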

The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis

TL;DR: This paper discusses the fallacy of post-experiment power calculations, shows that the practice is extensive, and presents arguments demonstrating that its underlying logic is fundamentally flawed.

Publication Manual of the American Psychological Association

TL;DR: The manual provides stronger standards for maintaining participant confidentiality and for reducing bias in language describing participants, suggesting that researchers avoid derogatory usage such as “minority” for “non-white” populations.