
Showing papers on "Outlier" published in 1993


Journal ArticleDOI
TL;DR: It is concluded using quantitative examples that robust measures are much less affected by outliers and cutoffs than measures based on moments, and fitting explicit distribution functions as a way of recovering means and standard deviations is probably not worth routine use.
Abstract: The effect of outliers on reaction time analyses is evaluated. The first section assesses the power of different methods of minimizing the effect of outliers on analysis of variance (ANOVA) and makes recommendations about the use of transformations and cutoffs. The second section examines the effect of outliers and cutoffs on different measures of location, spread, and shape and concludes using quantitative examples that robust measures are much less affected by outliers and cutoffs than measures based on moments. The third section examines fitting explicit distribution functions as a way of recovering means and standard deviations and concludes that unless fitting the distribution function is used as a model of distribution shape, the method is probably not worth routine use.
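
As a rough numeric illustration of the contrast the abstract draws (not code from the paper; a minimal sketch assuming Python with NumPy/SciPy and made-up reaction times), a single slow response inflates the mean and standard deviation while the median, trimmed mean, and MAD barely move:

```python
import numpy as np
from scipy import stats

# Hypothetical reaction times in milliseconds; 2950 is a single lapse/outlier.
rt = np.array([412, 388, 455, 430, 401, 397, 420, 2950])

print("mean:", rt.mean())                         # dragged toward the outlier
print("sd:", rt.std(ddof=1))                      # badly inflated
print("median:", np.median(rt))                   # barely moves
print("20% trimmed mean:", stats.trim_mean(rt, 0.2))
mad = 1.4826 * np.median(np.abs(rt - np.median(rt)))
print("MAD scale estimate:", mad)                 # robust analogue of the SD
```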

1,920 citations


Journal ArticleDOI
TL;DR: An iterative outlier detection and adjustment procedure is used to obtain joint estimates of model parameters and outlier effects, and the issues of spurious and masking effects are discussed.
Abstract: Time series data are often subject to uncontrolled or unexpected interventions, from which various types of outlying observations are produced. Outliers in time series, depending on their nature, may have a moderate to significant impact on the effectiveness of the standard methodology for time series analysis with respect to model identification, estimation, and forecasting. In this article we use an iterative outlier detection and adjustment procedure to obtain joint estimates of model parameters and outlier effects. Four types of outliers are considered, and the issues of spurious and masking effects are discussed. The major differences between this procedure and those proposed in earlier literature include (a) the types and effects of outliers are obtained based on less contaminated estimates of model parameters, (b) the outlier effects are estimated simultaneously using multiple regression, and (c) the model parameters and the outlier effects are estimated jointly. The sampling behavior of the test s...

717 citations


Journal ArticleDOI
TL;DR: This article defines outliers in terms of their position relative to the model for the good observations and studies identifiers whose breakdown points, in a sense derived from Donoho and Huber, are as high as possible.
Abstract: One approach to identifying outliers is to assume that the outliers have a different distribution from the remaining observations. In this article we define outliers in terms of their position relative to the model for the good observations. The outlier identification problem is then the problem of identifying those observations that lie in a so-called outlier region. Methods based on robust statistics and outward testing are shown to have the highest possible breakdown points in a sense derived from Donoho and Huber. But a more detailed analysis shows that methods based on robust statistics perform better with respect to worst-case behavior. A concrete outlier identifier based on a suggestion of Hampel is given.
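
A minimal sketch, assuming Python/NumPy, of the kind of median/MAD rule in the spirit of Hampel's suggestion that the article builds on; the cutoff c = 3.5 and the sample data are illustrative choices, not the paper's exact identifier:

```python
import numpy as np

def hampel_identifier(x, c=3.5):
    """Flag points whose distance from the median exceeds c times the
    (normal-consistent) median absolute deviation."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.where(np.abs(x - med) > c * mad)[0]

print(hampel_identifier([2.1, 1.9, 2.4, 2.0, 2.2, 9.7]))  # flags index 5 (9.7)
```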

627 citations


Proceedings ArticleDOI
11 May 1993
TL;DR: The authors consider the problem of robustly estimating optical flow from a pair of images using a new framework based on robust estimation which addresses violations of the brightness constancy and spatial smoothness assumptions and present a graduated non-convexity algorithm for recovering optical flow and motion discontinuities.
Abstract: The authors consider the problem of robustly estimating optical flow from a pair of images using a new framework based on robust estimation which addresses violations of the brightness constancy and spatial smoothness assumptions. They also show the relationship between the robust estimation framework and line-process approaches for coping with spatial discontinuities. In doing so, the notion of a line process is generalized to that of an outlier process that can account for violations in both the brightness and smoothness assumptions. A graduated non-convexity algorithm is presented for recovering optical flow and motion discontinuities. The performance of the robust formulation is demonstrated on both synthetic data and natural images.

522 citations


Journal ArticleDOI
01 Sep 1993-Ecology
TL;DR: This paper attempts to introduce some distribution-free and robust techniques to ecologists and to offer a critical appraisal of the potential advantages and drawbacks of these methods.
Abstract: After making a case for the prevalence of nonnormality, this paper attempts to introduce some distribution-free and robust techniques to ecologists and to offer a critical appraisal of the potential advantages and drawbacks of these methods. The techniques presented fall into two distinct categories, methods based on ranks and "computer-intensive" techniques. Distribution-free rank tests have features that can be recommended. They free the practitioner from concern about the underlying distribution and are very robust to outliers. If the distribution underlying the observations is other than normal, rank tests tend to be more efficient than their parametric counterparts. The absence, in computing packages, of rank procedures for complex designs may, however, severely limit their use for ecological data. An entire body of novel distribution-free methods has been developed in parallel with the increasing capacities of today's computers to process large quantities of data. These techniques either reshuffle or resample a data set (i.e., sample with replacement) in order to perform their analyses. The former we shall refer to as "permutation" or "randomization" methods and the latter as "bootstrap" techniques. These computer-intensive methods provide new alternatives for the problem of a small and/or unbalanced data set, and they may be the solution for parameter estimation when the sampling distribution cannot be derived analytically. Caution must be exercised in the interpretation of these estimates because confidence limits may be too small.
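
For readers unfamiliar with the two families of computer-intensive methods mentioned above, here is a minimal sketch (assuming Python/NumPy; the sample sizes, replicate counts, and lognormal data are arbitrary illustrations) of a bootstrap confidence interval and a permutation test:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(size=30)                  # a skewed, non-normal sample
y = rng.lognormal(mean=0.3, size=30)        # a second group

# Bootstrap: resample with replacement to approximate the sampling distribution
boot_means = [rng.choice(x, size=len(x), replace=True).mean()
              for _ in range(5000)]
print("bootstrap 95% CI for the mean of x:",
      np.percentile(boot_means, [2.5, 97.5]))

# Permutation (randomization) test: reshuffle group labels under the null
observed = y.mean() - x.mean()
pooled = np.concatenate([x, y])
perm_diffs = []
for _ in range(5000):
    rng.shuffle(pooled)
    perm_diffs.append(pooled[len(x):].mean() - pooled[:len(x)].mean())
print("permutation p-value:", np.mean(np.abs(perm_diffs) >= abs(observed)))
```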

462 citations


Journal ArticleDOI
TL;DR: In this paper, a statistical methodology for identifying outliers in production data with multiple inputs and outputs used in deterministic nonparametric frontier models is presented, which is useful in identifying observations that may contain some form of measurement error and thus merit closer scrutiny.
Abstract: This article provides a statistical methodology for identifying outliers in production data with multiple inputs and outputs used in deterministic nonparametric frontier models. The methodology is useful in identifying observations that may contain some form of measurement error and thus merit closer scrutiny. When data checking is costly, the methodology may be used to rank observations in terms of their dissimilarity to other observations in the data, suggesting a priority for further inspection of the data.

375 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce two test procedures for the detection of multiple outliers that appear to be less sensitive to the observations they are supposed to identify, and compare them with various existing methods.
Abstract: We consider the problem of identifying and testing multiple outliers in linear models. The available outlier identification methods often do not succeed in detecting multiple outliers because they are affected by the observations they are supposed to identify. We introduce two test procedures for the detection of multiple outliers that appear to be less sensitive to this problem. Both procedures attempt to separate the data into a set of “clean” data points and a set of points that contain the potential outliers. The potential outliers are then tested to see how extreme they are relative to the clean subset, using an appropriately scaled version of the prediction error. The procedures are illustrated and compared to various existing methods, using several data sets known to contain multiple outliers. Also, the performances of both procedures are investigated by a Monte Carlo study. The data sets and the Monte Carlo indicate that both procedures are effective in the detection of multiple outliers ...
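
The following is a much simplified sketch of the general idea described above (a clean subset plus scaled prediction errors), assuming Python/NumPy; the initial-subset rule, cutoff, and function name are illustrative and do not reproduce the authors' stepwise procedures:

```python
import numpy as np

def flag_outliers_clean_subset(X, y, cutoff=2.5, clean_frac=0.5):
    n, p = X.shape
    # Initial fit on all data, then keep the half with the smallest residuals
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    m = max(p + 1, int(clean_frac * n))
    clean = np.argsort(np.abs(y - X @ beta0))[:m]

    # Refit on the "clean" subset only
    Xc, yc = X[clean], y[clean]
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    s2 = np.sum((yc - Xc @ beta) ** 2) / (len(clean) - p)
    XtX_inv = np.linalg.inv(Xc.T @ Xc)

    flagged = []
    for i in sorted(set(range(n)) - set(clean.tolist())):
        xi = X[i]
        pred_var = s2 * (1.0 + xi @ XtX_inv @ xi)  # prediction-error variance
        if abs(y[i] - xi @ beta) / np.sqrt(pred_var) > cutoff:
            flagged.append(i)
    return flagged
```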

350 citations


Journal ArticleDOI
TL;DR: The issues of forecasting when outliers occur near or at the forecast origin are investigated and a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage is proposed.
Abstract: Time-series data are often contaminated with outliers due to the influence of unusual and non-repetitive events. Forecast accuracy in such situations is reduced due to (1) a carry-over effect of the outlier on the point forecast and (2) a bias in the estimates of model parameters. Hillmer (1984) and Ledolter (1989) studied the effect of additive outliers on forecasts. It was found that forecast intervals are quite sensitive to additive outliers, but that point forecasts are largely unaffected unless the outlier occurs near the forecast origin. In such a situation the carry-over effect of the outlier can be quite substantial. In this study, we investigate the issues of forecasting when outliers occur near or at the forecast origin. We propose a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage. One aspect of this study is on the carry-over effects of outliers on forecasts. Four types of outliers are considered: innovational outlier, additive outlier, temporary change, and level shift. The effects due to a misidentification of an outlier type are examined. The performance of the outlier detection procedure is studied for cases where outliers are near the end of the series. In such cases, we demonstrate that statistical procedures may not be able to effectively determine the outlier types due to insufficient information. Some strategies are recommended to reduce potential difficulties caused by incorrectly detected outlier types. These findings may serve as a justification for forecasting in conjunction with judgment. Two real examples are employed to illustrate the issues discussed.

172 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate robustness in the logistic regression model and show that there are other versions of robust-resistant estimates which have bias often approximately the same as and sometimes even less than the traditional logistic estimate; these estimates belong to the Mallows class.
Abstract: We investigate robustness in the logistic regression model. Copas has studied two forms of robust estimator: a robust-resistant estimate of Pregibon and an estimate based on a misclassification model. He concluded that robust-resistant estimates are much more biased in small samples than the usual logistic estimate is and recommends a bias-corrected version of the misclassification estimate. We show that there are other versions of robust-resistant estimates which have bias often approximately the same as and sometimes even less than the logistic estimate; these estimates belong to the Mallows class. In addition, the corrected misclassification estimate is inconsistent at the logistic model; we develop a simple consistent modification. The modified estimate is a member of the Mallows class but, unlike most robust estimates, it has an interpretable tuning constant. The results are illustrated on data sets featuring different kinds of outliers.

171 citations


Journal ArticleDOI
TL;DR: A statistical method for automatic quality control of measurement data with distributions close to Gaussian is presented; four prediction methods are compared and, using the best of these, the method is tested on artificial and real turbulence data.
Abstract: A statistical method for automatic quality control of measurement data with distributions close to Gaussian is presented. For each data point a prediction is made, based on the mean, variance and point-to-point correlation of the time series. The predicted value is compared with the actual value and if the difference between the two is 'large' then that data point is either marked as an outlier or replaced by the forecast value. Four different prediction methods are tested and, using the best one of these, the method is tested using artificial and real turbulence data.
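
A minimal sketch of the basic idea (assuming Python/NumPy; the AR(1)-style one-step prediction and the threshold k are illustrative simplifications of the prediction methods compared in the paper):

```python
import numpy as np

def flag_spikes(series, k=4.0):
    """Predict each point from the previous one via the lag-1 autocorrelation
    and flag points whose prediction error is 'large'."""
    x = np.asarray(series, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = x - mu
    rho = np.corrcoef(z[:-1], z[1:])[0, 1]       # lag-1 autocorrelation
    pred = mu + rho * z[:-1]                     # one-step-ahead prediction
    resid_sd = sigma * np.sqrt(1.0 - rho ** 2)   # sd of the prediction error
    bad = np.abs(x[1:] - pred) > k * resid_sd
    return np.where(bad)[0] + 1                  # indices of flagged points
```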

164 citations


Journal ArticleDOI
TL;DR: The difficulty that traditional outlier detection methods, such as that of Tsay, have in identifying level shifts in time series is demonstrated and a simple modification to Tsay's procedure is proposed that improves the ability to correctly identify level shifts.
Abstract: This article demonstrates the difficulty that traditional outlier detection methods, such as that of Tsay, have in identifying level shifts in time series. Initializing the outlier/level-shift search with an estimated autoregressive moving average model lowers the power of the level-shift detection statistics. Furthermore, the rule employed by these methods for distinguishing between level shifts and innovation outliers does not work well in the presence of level shifts. A simple modification to Tsay's procedure is proposed that improves the ability to correctly identify level shifts. This modification is relatively easy to implement and appears to be quite effective in practice.

Journal ArticleDOI
TL;DR: The estimation and detection of outliers in a time series generated by a Gaussian autoregressive moving average process is considered and it is shown that the estimation of additive outliers is directly related to the estimation of missing or deleted observations.
Abstract: The estimation and detection of outliers in a time series generated by a Gaussian autoregressive moving average process is considered. It is shown that the estimation of additive outliers is directly related to the estimation of missing or deleted observations. A recursive procedure for computing the estimates is given. Likelihood ratio and score criteria for detecting additive outliers are examined and are shown to be closely related to the leave-k-out diagnostics studied by Bruce and Martin. The procedures are contrasted with those appropriate for innovational outliers

Journal ArticleDOI
01 Jan 1993-Analyst
TL;DR: This review summarizes critically the approaches available to the treatment of suspect outlying results in sets of experimental measurements, including the use of parametric methods such as the Dixon test and the application of robust statistical methods, which down-weight the importance of outliers.
Abstract: This review summarizes critically the approaches available to the treatment of suspect outlying results in sets of experimental measurements. It covers the use of parametric methods such as the Dixon test (with comments on the problems of multiple outliers); the application of non-parametric statistics based on the median to by-pass outlier problems; and the application of robust statistical methods, which down-weight the importance of outliers. The extension of these approaches to outliers occurring in regression problems is also surveyed.
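
As a small illustration of the first approach mentioned (the Dixon test for a single suspect value in a small sample), here is a sketch assuming Python/NumPy; the critical values are the commonly tabulated approximate two-sided 95% values for the r10 statistic and should be checked against a published table before real use:

```python
import numpy as np

Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625,
             7: 0.568, 8: 0.526, 9: 0.493, 10: 0.466}

def dixon_q(values):
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    spread = x[-1] - x[0]
    q_low = (x[1] - x[0]) / spread        # suspecting the smallest value
    q_high = (x[-1] - x[-2]) / spread     # suspecting the largest value
    q = max(q_low, q_high)
    suspect = x[0] if q_low >= q_high else x[-1]
    return suspect, q, bool(q > Q_CRIT_95.get(n, np.nan))

print(dixon_q([10.2, 10.4, 10.3, 10.1, 12.9]))   # flags 12.9 as an outlier
```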

Journal ArticleDOI
TL;DR: In this paper, a stepwise analysis is proposed for confirmation of outliers and leverage points detected using the robust estimation methods, and diagnostic measures are constructed for observations added back to the reduced sample.
Abstract: Identification of multiple outliers and leverage points is difficult because of the masking effect. Recently, Rousseeuw and van Zomeren suggested using high-breakdown robust estimation methods—the least median of squares and minimum volume ellipsoid—for unmasking these observations. These methods tend to declare too many observations as extreme, however. A stepwise analysis is proposed here for confirmation of outliers and leverage points detected using the robust methods. Diagnostic measures are constructed for observations added back to the reduced sample. They are shown graphically. The complementary use of robust and diagnostic methods gives satisfactory results in analyzing two data sets. One data set consists of ten bad and four good leverage points. Four (or 10, using a different cutoff) extreme observations of the other data set (of size 28) are identified using the robust methods, but the stepwise analysis confirms only one. The limitations of Atkinson's confirmatory approach are discusse...

Journal ArticleDOI
TL;DR: In this article, a robust version of the log likelihood for multivariate normal data is used to construct M-estimators which are resistant to contamination by outliers, and the robust estimators are found using a minimisation routine which retains the flexible parameterisations of the multiivariate normal approach.
Abstract: Quantitative traits measured over pedigrees of individuals may be analysed using maximum likelihood estimation, assuming that the trait has a multivariate normal distribution. This approach is often used in the analysis of mixed linear models. In this paper a robust version of the log likelihood for multivariate normal data is used to construct M-estimators which are resistant to contamination by outliers. The robust estimators are found using a minimisation routine which retains the flexible parameterisations of the multivariate normal approach. Asymptotic properties of the estimators are derived, computation of the estimates and their use in outlier detection tests are discussed, and a small simulation study is conducted.

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach to study the sensitivity of inferences to possible outliers is proposed, which recalculates the marginal posterior distributions of parameters of interest under assumptions of heavy tails.
Abstract: Many recent applications of the two-level hierarchical model (HM) have focused on drawing inferences concerning fixed effects—that is, structural parameters in the Level 2 model that capture the way Level 1 parameters (e.g., children’s rates of cognitive growth, within-school regression coefficients) vary as a function of Level 2 characteristics (e.g., children’s home environments and educational experiences; school policies, practices, and compositional characteristics). Under standard assumptions of normality in the HM, point estimates and intervals for fixed effects may be sensitive to outlying Level 2 units (e.g., a child whose rate of cognitive growth is unusually slow or rapid, a school at which students achieve at an unusually high level given their background characteristics, etc.). A Bayesian approach to studying the sensitivity of inferences to possible outliers involves recalculating the marginal posterior distributions of parameters of interest under assumptions of heavy tails, which has the e...

Journal ArticleDOI
TL;DR: In this paper, the stalactite plot provides a cogent summary of suspected outliers as the subset size increases and the dependence on subset size can be virtually removed by a simulation-based normalization.
Abstract: Detection of multiple outliers in multivariate data using Mahalanobis distances requires robust estimates of the means and covariance of the data. We obtain this by sequential construction of an outlier free subset of the data, starting from a small random subset. The stalactite plot provides a cogent summary of suspected outliers as the subset size increases. The dependence on subset size can be virtually removed by a simulation-based normalization. Combined with probability plots and resampling procedures, the stalactite plot, particularly in its normalized form, leads to identification of multivariate outliers, even in the presence of appreciable masking.
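
A minimal sketch of a forward-search-style procedure of this kind, assuming Python with NumPy/SciPy; the starting subset size, growth rule, stopping fraction, and chi-squared cutoff are illustrative choices rather than the paper's exact algorithm (and no stalactite plot is produced):

```python
import numpy as np
from scipy import stats

def forward_search_outliers(X, cutoff_p=0.975, stop_frac=0.75, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def mahal2(subset):
        mu = X[subset].mean(axis=0)
        cov = np.cov(X[subset], rowvar=False) + 1e-8 * np.eye(p)  # guard
        diff = X - mu
        return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

    subset = rng.choice(n, size=p + 1, replace=False)    # small random start
    while len(subset) < int(stop_frac * n):
        d2 = mahal2(subset)
        subset = np.argsort(d2)[:len(subset) + 1]        # grow by one point
    d2 = mahal2(subset)
    return np.where(d2 > stats.chi2.ppf(cutoff_p, df=p))[0]
```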

Journal ArticleDOI
TL;DR: In this article, the relative performance of adaptive robust estimators (partially adaptive and adaptive procedures) is compared with that of some common non-adaptive estimators, including OLS, LAD, and the generalized method of moments estimator.
Abstract: Numerous estimation techniques for regression models have been proposed. These procedures differ in how sample information is used in the estimation procedure. The efficiency of least squares (OLS) estimators implicitly assumes normally distributed residuals and is very sensitive to departures from normality, particularly to "outliers" and thick-tailed distributions. Least absolute deviation (LAD) estimators are less sensitive to outliers and are optimal for Laplace random disturbances, but not for normal errors. This paper reports Monte Carlo comparisons of OLS, LAD, two robust estimators discussed by Huber, three partially adaptive estimators, Newey's generalized method of moments estimator, and an adaptive maximum likelihood estimator based on a normal kernel studied by Manski. This paper is the first to compare the relative performance of some adaptive robust estimators (partially adaptive and adaptive procedures) with some common nonadaptive robust estimators. The partially adaptive estimators are based ...
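
To make the OLS-versus-LAD contrast above concrete, here is a small Monte Carlo sketch (assuming Python with NumPy/SciPy; LAD is obtained here by direct minimisation of the absolute-error sum rather than by the specific algorithms compared in the paper, and the sample size and contamination level are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

def lad_fit(X, y):
    """Least absolute deviations fit via direct minimisation (sketch only)."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS starting values
    return minimize(lambda b: np.sum(np.abs(y - X @ b)),
                    beta0, method="Nelder-Mead").x

rng = np.random.default_rng(1)
n, reps, ols_err, lad_err = 50, 200, [], []
for _ in range(reps):
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    eps = rng.normal(size=n)
    eps[:3] += 15.0                                   # a few gross outliers
    y = 1.0 + 2.0 * x + eps
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    b_lad = lad_fit(X, y)
    ols_err.append((b_ols[1] - 2.0) ** 2)
    lad_err.append((b_lad[1] - 2.0) ** 2)
print("slope MSE  OLS:", np.mean(ols_err), " LAD:", np.mean(lad_err))
```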

Journal ArticleDOI
TL;DR: In this article, a probabilistic algorithm called the "Feasible Set Algorithm" is proposed, which produces only trial values satisfying the necessary condition for the optimum and which provides the exact solution with probability 1 as the number of iterations increases.

Journal ArticleDOI
01 Feb 1993
TL;DR: The theory of robust statistics, which formally addresses these problems, is used in a robust sequential estimator (RSE) of the authors' design and is extended to several well-known maximum-likelihood estimators (M-estimators).
Abstract: Depth maps are frequently analyzed as if the errors are normally, identically, and independently distributed. This noise model does not consider at least two types of anomalies encountered in sampling: a few large deviations in the data (outliers) and a uniformly distributed error component arising from rounding and quantization. The theory of robust statistics, which formally addresses these problems, is used in a robust sequential estimator (RSE) of the authors' design. The RSE assigns different weights to each observation based on maximum-likelihood analysis, assuming that the errors follow a t distribution which represents the outliers more realistically. This concept is extended to several well-known maximum-likelihood estimators (M-estimators). Since most M-estimators do not have a target distribution, the weights are obtained by a simple linearization and then embedded in the same RSE algorithm. Experimental results over a variety of real and synthetic range imagery are presented, and the performance of these estimators is evaluated under different noise conditions.
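
A minimal one-dimensional sketch of the weighting idea the abstract describes (assuming Python/NumPy): maximum-likelihood estimation under a t error model leads to iterative reweighting in which observations with large standardized residuals receive small weights. The degrees of freedom nu, starting values, and convergence rule below are illustrative, and this is not the authors' RSE for range images:

```python
import numpy as np

def t_location_scale(x, nu=3.0, n_iter=200, tol=1e-10):
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - mu)) or x.std()   # robust start
    for _ in range(n_iter):
        r = (x - mu) / sigma
        w = (nu + 1.0) / (nu + r ** 2)           # outliers get small weights
        mu_new = np.sum(w * x) / np.sum(w)
        sigma_new = np.sqrt(np.sum(w * (x - mu_new) ** 2) / len(x))
        if abs(mu_new - mu) < tol:
            return mu_new, sigma_new
        mu, sigma = mu_new, sigma_new
    return mu, sigma
```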

Journal ArticleDOI
TL;DR: In this paper, the authors present the state of the art in the computation of robust estimates of multivariate location and shape using combinatorial estimators such as the minimum volume ellipsoid (MVE) and iterative M- and S-estimators.
Abstract: This paper reviews the state of the art in the computation of robust estimates of multivariate location and shape using combinatorial estimators such as the minimum volume ellipsoid (MVE) and iterative M- and S-estimators. We also present new results on the behavior of M- and S-estimators in the presence of different types of outliers, and give the first computational evidence on compound estimators that use the MVE as a starting point for an S-estimator. Problems with too many data points in too many dimensions cannot be handled by any available technology; however, the methods presented in this paper substantially extend the size of problem that can be successfully handled.
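
As a rough illustration of the M-estimation side of this review (a sketch assuming Python/NumPy; the Huber-type weight function, tuning constant c, and the simple normalisation of the weighted covariance are simplifications, not the MVE or the S-estimators discussed in the paper):

```python
import numpy as np

def reweighted_location_scatter(X, c=3.0, n_iter=100, tol=1e-8):
    """Iteratively reweighted mean/covariance that down-weights points with
    large Mahalanobis distances (a simplified Huber-type M-estimation step)."""
    n, p = X.shape
    mu = np.median(X, axis=0)                   # robust starting values
    cov = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        d = np.sqrt(np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff))
        w = np.minimum(1.0, c / np.maximum(d, 1e-12))
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        cov_new = (w[:, None] * diff).T @ diff / w.sum()
        done = np.max(np.abs(mu_new - mu)) < tol
        mu, cov = mu_new, cov_new
        if done:
            break
    return mu, cov
```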

Journal ArticleDOI
TL;DR: In this article, the quality of the approximation given by the elemental set algorithm for the least median of squares, least trimmed squares, and ordinary least squares criteria is studied, in the context of high breakdown regression and multivariate location/scale estimation.
Abstract: The elemental set algorithm involves performing many fits to a data set, each fit made to a subsample of size just large enough to estimate the parameters in the model. Elemental sets have been proposed as a computational device to approximate estimators in the areas of high breakdown regression and multivariate location/scale estimation, where exact optimization of the criterion function is computationally intractable. Although elemental set algorithms are used widely and for a variety of problems, the quality of the approximation they give has not been studied. This article shows that they provide excellent approximations for the least median of squares, least trimmed squares, and ordinary least squares criteria. It is suggested that the approach likely will be equally effective in the other problem areas in which exact optimization of a criterion is difficult or impossible.
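
A minimal sketch of the elemental set idea for the least median of squares criterion (assuming Python/NumPy; the number of subsets and the use of the plain median are illustrative choices):

```python
import numpy as np

def lms_elemental(X, y, n_subsets=3000, seed=0):
    """Approximate least median of squares: fit exactly on many random
    subsets of size p and keep the fit with the smallest median squared
    residual over the whole data set."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_crit = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])   # exact fit to p points
        except np.linalg.LinAlgError:
            continue                                  # skip singular subsets
        crit = np.median((y - X @ beta) ** 2)
        if crit < best_crit:
            best_beta, best_crit = beta, crit
    return best_beta, best_crit
```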

Journal ArticleDOI
Yu-Long Xie1, Ji-Hong Wang1, Yi-Zeng Liang1, Lixian Sun1, Xin-Hua Song1, Ru-Qin Yu1 
TL;DR: In this paper, projection pursuit (PP) is used to carry out principal component analysis with a criterion which is more robust than the variance, and generalized simulated annealing (GSA) is introduced as an optimization procedure in the process of PP calculation to guarantee the global optimum.
Abstract: Principal component analysis (PCA) is a widely used technique in chemometrics. The classical PCA method is, unfortunately, non-robust, since the variance is adopted as the objective function. In this paper, projection pursuit (PP) is used to carry out PCA with a criterion which is more robust than the variance. In addition, the generalized simulated annealing (GSA) algorithm is introduced as an optimization procedure in the process of PP calculation to guarantee the global optimum. The results for simulated data sets show that PCA via PP is resistant to the deviation of the error distribution from the normal one. The method is especially recommended for use in cases with possible outlier(s) existing in the data.

Journal ArticleDOI
TL;DR: A new type of multiple time period outlier, called a reallocation, is introduced: a block of unusually high and low values occurring in such a way that the sum of the observations within the block is the same as might have been expected for an undisturbed series.
Abstract: Time series data often contain outliers which have an effect on parameter estimates and forecasts. Outliers in isolation have been well studied. However, in business and economic data, it is common to see unusually low observations followed by unusually high observations or vice versa. We model this behaviour by using a new type of multiple time period outlier which we call a reallocation, defined to be a block of unusually high and low values occurring in such a way that the sum of the observations within the block is the same as might have been expected for an undisturbed series. We derive tests for detecting reallocation outliers and distinguishing them from additive outliers. We show the effect on forecasts and forecast intervals of ignoring reallocation outliers.

Journal ArticleDOI
TL;DR: In this article, a simple modification of the procedure which yields statistics having the same asymptotic distributions as stated in Perron (1989) was proposed. But the analysis is restricted to the case where the breakpoint is unknown.
Abstract: This note discusses tests for a unit root allowing the possibility of a one-time change in the intercept and/or the slope of the trend function in the additive outlier model considered in Perron (1989). We discuss and correct an error in the stated asymptotic distributions of the tests in this case. We propose a simple modification of the procedure which yields statistics having the same asymptotic distributions as stated in Perron (1989). We also discuss the adequacy of the asymptotic approximations and various extensions to the case where the breakpoint is unknown with corresponding asymptotic critical values.

Journal ArticleDOI
TL;DR: In this paper, the authors compare two posterior probability methods for dealing with outliers in linear models and find that combining diagnostics from the mean-shift and variance-shift models is more effective than using probabilities computed from the posterior distributions of actual realized residuals.
Abstract: This paper compares the use of two posterior probability methods to deal with outliers in linear models. We show that putting together diagnostics that come from the mean-shift and variance-shift models yields a procedure that seems to be more effective than the use of probabilities computed from the posterior distributions of actual realized residuals. The relation of the suggested procedure to the use of a certain predictive distribution for diagnostics is derived. Two main procedures for outlier detection have emerged when using the Bayesian approach. The first of these confines itself to postulating a null model for the generation of the data and then seeks identification methods for outliers with no alternative model to the null entertained. Examples of work in the category are (i) use of the predictive distribution for detection, (ii) using the posterior probabilities of various unobserved perturbations, and (iii) looking at the change in a posterior of interest when some observations are deleted. These methods will be discussed in § 3 of this paper, after the basic model is set out in § 2. The second procedure takes into account an alternative model for the generation of a subset of the sample. As examples, various authors have proposed and utilized the mean-shift model and the variance-inflation model. These methods are discussed in § 4 of this paper. We show in this paper that these procedures can be classified as those which identify outliers by looking at some functions of the predictive density ordinate, and those that look at the posterior probabilities of the unobserved residuals. The first seems to be effective in overcoming the problem of high leverage outliers, in contrast to the latter, where outliers are masked by high leverage. These results are illustrated by examples in § 5.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A nonlinear adaptive noise filter for filtering nonstationary signals from image sequences is presented; it adapts instantaneously using local statistics obtained from a robust recursive estimator based on order statistics.
Abstract: A nonlinear adaptive noise filter for filtering the nonstationary signals from image sequences is presented. The filter uses estimates of the local statistics to adapt instantaneously. The authors propose to use a robust recursive estimator for those statistics which is based on order statistics. Prior to filtering, motion is estimated from the sequence and compensated for. For the estimation, a recursive block-matcher is used with a match criterion based on higher order statistics. By down-weighting outlier data, this estimator provides robust estimates. The overall combination is able to remove severe noise adequately while maintaining sharp results. The computational complexity makes the method suitable for off-line processing only.

Journal ArticleDOI
TL;DR: In this article, the authors derived the approximate expected values of the estimates of the model coefficients and of the innovation variances in the presence of a single additive outlier, provided the size of the series is not too small.

Proceedings ArticleDOI
18 Oct 1993
TL;DR: Two algorithms for identifying subtle outlier errors in a variety of multibeam swath sounding systems are developed, based on robust estimation of autoregressive (AR) model parameters and energy minimization techniques.
Abstract: For a variety of reasons, multibeam swath sounding systems produce errors that can seriously corrupt navigational charts. To address this problem, the authors have developed two algorithms for identifying subtle outlier errors in a variety of multibeam systems. The first algorithm treats the swath as a sequence of images. The algorithm is based on robust estimation of autoregressive (AR) model parameters. The second algorithm is based on energy minimization techniques. The data are represented by a weak-membrane or thin-plate model, and a global optimization procedure is used to find a stable surface shape. Both of these algorithms have undergone extensive testing at bathymetric processing centers to assess performance. The algorithms were found to have a probability of detection high enough to be useful and a false-alarm rate that does not significantly degrade the data quality. The resulting software is currently being used both at processing centers and at sea as an aid to bathymetric data processors.

Journal ArticleDOI
TL;DR: This article extends Yuen's solution to repeated measures and randomized block designs and shows that the new procedure compares well to the usual F test using the Huynh-Feldt correction of the degrees of freedom.
Abstract: A well-known result is that the value of the standard error of the sample mean is sensitive to heavy tails (Tukey, 1960). In particular, moving slightly away from a normal distribution toward a heavier tailed distribution can increase the standard error by a considerable amount. As a result, methods for comparing means can have relatively low power. This problem is particularly serious in psychology because recent investigations indicate that the distributions associated with psychometric measures often have outliers and very heavy tails (Micceri, 1989; Wilcox, 1990a). One approach to this problem is to compare trimmed means instead. Yuen (1974) describes a method for comparing the trimmed means of two independent groups, and she demonstrated that her procedure can have considerably more power than Welch's test for means. This paper extends Yuen's solution to repeated measures and randomized block designs. Simulations indicate that the new procedure compares well to the usual F test using the Huynh-Feldt correction of the degrees of freedom.
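
For reference, here is a sketch of Yuen's original two-independent-groups test that the paper extends (assuming Python with NumPy/SciPy; the 20% trimming proportion is the conventional choice, and the repeated-measures extension itself is not shown):

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Compare two independent groups via trimmed means, winsorized
    variances, and a Welch-type approximation to the degrees of freedom."""
    def trimmed_parts(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = len(a)
        g = int(np.floor(trim * n))              # observations cut per tail
        h = n - 2 * g                            # effective sample size
        tmean = a[g:n - g].mean()
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]    # winsorize the tails
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
        return tmean, d, h
    t1, d1, h1 = trimmed_parts(x)
    t2, d2, h2 = trimmed_parts(y)
    t_stat = (t1 - t2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t_stat, df, 2 * stats.t.sf(abs(t_stat), df)
```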