Journal ArticleDOI

Increasing the Power of Nonparametric Tests by Detecting and Downweighting Outliers

Donald W. Zimmerman
01 Oct 1995 - Vol. 64, Iss. 1, pp. 71-78
TLDR
The power of both the Student t test and the Wilcoxon-Mann-Whitney test declines substantially when samples are obtained from outlier-prone densities (mixed-normal, Cauchy, lognormal, and mixed-uniform), and an outlier detection and downweighting (ODD) procedure augments the power of both tests under these conditions.
Abstract
In this study, methods are examined that can be described, somewhat paradoxically, as robust nonparametric statistics. Although nonparametric tests effectively control the probability of Type I errors through rank randomization, they do not always control the probability of Type II errors and power, which can be grossly inflated or deflated by the shape of distributions. The power of the Student t test and the Wilcoxon-Mann-Whitney test declines substantially when samples are obtained from outlier-prone densities, including mixed-normal, Cauchy, lognormal, and mixed-uniform densities. However, the nonparametric test acquires an advantage, because outliers influence the t test to a relatively greater extent. Under these conditions, an outlier detection and downweighting (ODD) procedure, usually associated with parametric significance tests, augments the power of both the t test and the Wilcoxon-Mann-Whitney test.
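
The abstract does not spell out the ODD rule itself; the following is a minimal sketch of how such a step might be applied before the two tests, assuming an illustrative MAD-based detection cutoff and a fixed shrinkage factor (both assumptions, not the paper's exact procedure).

```python
# Hypothetical sketch of an outlier detection and downweighting (ODD) step.
# The cutoff (3 robust SDs) and shrinkage factor (0.3) are illustrative
# assumptions; the paper's exact procedure may differ.
import numpy as np
from scipy import stats

def odd_adjust(x, cutoff=3.0, weight=0.3):
    """Shrink scores flagged as outliers toward the sample median."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # robust scale estimate
    if mad == 0:
        return x
    flagged = np.abs(x - med) / mad > cutoff
    adjusted = x.copy()
    adjusted[flagged] = med + weight * (x[flagged] - med)   # downweight deviations
    return adjusted

rng = np.random.default_rng(1)

def mixed_normal(n, mu, p=0.1, wide=10.0):
    """Outlier-prone mixed-normal sample: N(mu, 1) contaminated by N(mu, wide)."""
    scale = np.where(rng.random(n) < p, wide, 1.0)
    return mu + rng.normal(0.0, scale)

a, b = mixed_normal(30, 0.0), mixed_normal(30, 0.8)

for label, (x, y) in {"raw": (a, b), "ODD": (odd_adjust(a), odd_adjust(b))}.items():
    t_p = stats.ttest_ind(x, y, equal_var=False).pvalue
    u_p = stats.mannwhitneyu(x, y, alternative="two-sided").pvalue
    print(f"{label:>3}: t test p = {t_p:.3f}, Wilcoxon-Mann-Whitney p = {u_p:.3f}")
```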


Citations
Journal ArticleDOI

The power of outliers (and why researchers should ALWAYS check for them)

TL;DR: This paper summarizes the potential causes of extreme scores in a data set, how to detect them, and whether they should be removed, and shows how strongly even a small proportion of outliers can affect simple analyses.
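
As a toy illustration (not from the cited paper) of how a single extreme score can dominate even the simplest analysis:

```python
# Toy example: one extreme score roughly doubles the sample mean twice over.
import numpy as np

scores = np.array([4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3])
print(scores.mean())                    # 4.05
print(np.append(scores, 40.0).mean())   # about 8.04
```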
Journal ArticleDOI

Improving your data transformations: Applying the Box-Cox transformation

TL;DR: The Box-Cox transformation (Box & Cox, 1964) is a family of power transformations that incorporates and extends the traditional options, helping researchers find the optimal normalizing transformation for each variable.
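
For example, SciPy's implementation of the Box-Cox transformation chooses the power parameter by maximum likelihood; a minimal illustration on synthetic data (not from the cited paper):

```python
# Minimal Box-Cox illustration; input data must be strictly positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=200)   # right-skewed data

transformed, lam = stats.boxcox(x)                 # lambda estimated by maximum likelihood
print(f"lambda = {lam:.2f}")
print(f"skewness: before = {stats.skew(x):.2f}, after = {stats.skew(transformed):.2f}")
```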
Journal ArticleDOI

Notes on the use of data transformations.

TL;DR: In this paper, the authors focus on the use of three data transformations most commonly discussed in statistics texts (square root, log, and inverse) for improving the normality of variables.
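
A quick sketch of the three transformations (all assume positive scores; strength increases from square root to inverse):

```python
import numpy as np

x = np.array([1.2, 2.5, 3.1, 4.8, 9.7, 25.0, 80.0])   # positively skewed scores
sqrt_x = np.sqrt(x)    # mild skew
log_x = np.log(x)      # moderate skew
inv_x = 1.0 / x        # severe skew (note: reverses the ordering of scores)
```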
Journal ArticleDOI

Knowledge withholding intentions in teams

TL;DR: This work investigates factors that influence team members' knowledge-withholding intentions (KWI), proposing that the social exchange relationships individuals form in the workplace, their perceptions of justice, and their knowledge-withholding self-efficacy shape those intentions.
Journal ArticleDOI

A psychometric study of six self-report measures for use with sexual offenders with cognitive and social functioning deficits

TL;DR: This paper examines the psychometric properties of six self-report measures, specifically adapted from existing sexual offender assessments for lower-functioning offenders, and explores treatment change.
References
Book

Outliers in Statistical Data

Vic Barnett, +1 more
TL;DR: This book presents an updated version of the reference work on outliers, including new areas of study such as outliers in directional data as well as developments in discordancy tests for univariate and multivariate samples.
Book

Robust statistics: the approach based on influence functions

TL;DR: This book develops robust statistics through the influence-function approach, covering one-dimensional estimators as well as robust estimation of multivariate location and covariance matrices.
Journal ArticleDOI

Rank Transformations as a Bridge between Parametric and Nonparametric Statistics

TL;DR: The rank transformation applies a parametric procedure to the ranks of the data instead of to the data themselves, providing a bridge between parametric and nonparametric statistics and a useful tool for developing nonparametric procedures for new problems.
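
A sketch of the rank-transformation idea for two independent samples, where the t test computed on the pooled ranks closely tracks the Wilcoxon-Mann-Whitney test (synthetic data, for illustration only):

```python
# Rank transformation: run an ordinary parametric test on the ranks of the pooled data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.lognormal(size=25)
y = rng.lognormal(size=25) * 1.8

ranks = stats.rankdata(np.concatenate([x, y]))   # rank the pooled data
rx, ry = ranks[:x.size], ranks[x.size:]

print(stats.ttest_ind(rx, ry).pvalue)                            # t test on ranks
print(stats.mannwhitneyu(x, y, alternative="two-sided").pvalue)  # Wilcoxon-Mann-Whitney
```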
Book

Identification of outliers

TL;DR: This monograph treats the identification of outliers, covering tests of discordancy and methods for detecting outlying observations in univariate and multivariate samples.