
Showing papers on "Heteroscedasticity published in 1970"


Book
01 Jan 1970
TL;DR: This classic book gives a systematic presentation of the statistical methods used for the analysis of economic data; the properties of the various procedures are studied within the framework of theoretical stochastic models, and their relevance for inference on economic phenomena is discussed at length.
Abstract: This now classic volume aims at a systematic presentation of the statistical methods used for the analysis of economic data. The properties of the various procedures are studied within the framework of theoretical stochastic models. Their relevance for inference on the economic phenomena is discussed at length. This third edition has been updated in many respects. Chapter 8 (Regression in Various Contexts) has been rewritten and now provides a full discussion of estimation in the linear models with a partially unknown covariance matrix, which introduces a systematic treatment of heteroscedasticity, random coefficients and composite errors. A new chapter has been added on simultaneous equation models that are non-linear with respect to the endogenous variables. The reader will also find new sections on shrunken estimators, on the choice of a model, on specification and estimation for distributed lag equations.
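The chapter's topic of estimation in linear models with a partially unknown covariance matrix is the setting for feasible generalized least squares. As a minimal illustration, here is a generic two-step textbook sketch under a multiplicative heteroscedasticity assumption; the function name and the log-variance specification are choices made for this example, not the book's own estimator.

```python
import numpy as np

def feasible_gls(X, y):
    """Two-step feasible GLS under multiplicative heteroscedasticity.

    Step 1: fit by OLS, then model the log of the squared residuals
    as linear in the regressors to estimate a variance function.
    Step 2: weighted least squares using the fitted variances,
    implemented as OLS on rescaled data.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    # assumed variance model: log(e^2) = X gamma + noise
    g, *_ = np.linalg.lstsq(X, np.log(e**2 + 1e-12), rcond=None)
    w = 1.0 / np.exp(X @ g)               # inverse estimated variances
    sw = np.sqrt(w)
    beta_fgls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta_fgls
```

When the variance function is well specified, the second-step estimator is more efficient than OLS while remaining a simple weighted regression.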

709 citations


Journal ArticleDOI
TL;DR: In this article, a linear regression model with independent, homoscedastic and normally distributed errors is analyzed in a stepwise manner to produce calculated residuals having this same property, where the residual is calculated as the deviation of the nth observation from its predicted value based on a least squares fit to only the first n observations.
Abstract: Regression models which specify independent, homoscedastic and normally distributed errors may be analyzed in a stepwise manner to produce calculated residuals having this same property. If the nth residual is calculated as the deviation of the nth observation from its predicted value based on a least squares fit to only the first n observations then the resulting sequence of residuals, appropriately normalized, are not only mutually independent and homoscedastic but are also independent of all of the calculated regression functions. If error variance is a monotonic function of the mean then, under certain regularity conditions, the calculated stepwise residuals are likewise monotonically heteroscedastic. Simple linear regression with equally spaced values of the independent variable constitutes one such regular case, and a Monte Carlo study of the “peak-test” of homoscedasticity in this instance shows that for small samples the stepwise residuals are substantially more sensitive to monotonic heteroscedasticity.
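The stepwise residual construction described above can be sketched in code. This is a minimal illustration, not the paper's own implementation: the function name is hypothetical, and the normalization factor sqrt(1 + x'(X'X)^{-1}x) is the standard scaling used for recursive residuals so that, under the model, they share the errors' common variance.

```python
import numpy as np

def stepwise_residuals(X, y):
    """Normalized stepwise (recursive) residuals for a linear model.

    The t-th residual is the deviation of observation t from its
    least-squares prediction based on the preceding observations,
    scaled by sqrt(1 + x'(X'X)^{-1}x) so that under independent,
    homoscedastic normal errors the residuals are i.i.d.
    """
    n, k = X.shape
    resids = []
    for t in range(k, n):          # need at least k points to fit
        Xt, yt = X[:t], y[:t]
        beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
        x_new = X[t]
        pred = x_new @ beta
        # normalization: sqrt(1 + x'(X'X)^{-1}x)
        scale = np.sqrt(1.0 + x_new @ np.linalg.inv(Xt.T @ Xt) @ x_new)
        resids.append((y[t] - pred) / scale)
    return np.array(resids)
```

With n observations and k regressors this yields n - k residuals, which can then be fed to a test such as the peak-test.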

52 citations


Journal ArticleDOI
TL;DR: In this paper, the L1 norm is employed in two new estimating techniques, direct least absolute (DLA) and two-stage least absolute (TSLA), which are compared to direct least squares (DLS) and two-stage least squares (TSLS) in four experiments covering the normal distribution case, a multicollinearity problem, a heteroskedastic variance problem, and a misspecified model.
Abstract: In this paper a distribution sampling study consisting of four major experiments is described. The L1 norm is employed in two new estimating techniques, direct least absolute (DLA) and two-stage least absolute (TSLA), and these two are compared to direct least squares (DLS) and two-stage least squares (TSLS). Four experiments testing the normal distribution case, a multicollinearity problem, a heteroskedastic variance problem, and a misspecified model were conducted. Two small sample sizes were used in each experiment, one with N = 20 and one with N = 10. In addition, conditional predictions were made using the reduced form of the four estimators plus two direct methods, least squares no restrictions (LSNR) and another new method known as least absolute no restrictions (LANR). The general conclusion was that the L1 norm estimators should prove equal to or superior to the L2 norm estimators for models using a structure similar to the overidentified one specified for this study, with randomly distributed error terms and very small sample sizes. BEGINNING WITH the method developed by Haavelmo [11] for solving the problem of single equation bias, econometricians have devoted considerable effort to developing additional methods for estimating the structural parameters of simultaneous equation models [2,12,20,24]. While it has been fairly easy to develop the asymptotic properties of these estimators, a distinguishing characteristic of econometric models is that they are invariably based upon small samples of data and thus, the asymptotic properties of the various estimators are not
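The L1-norm criterion behind the DLA estimator minimizes the sum of absolute residuals, which can be cast as a linear program. The sketch below is one standard way to compute such an estimate, not the study's own algorithm; the function name and the split of each residual into nonnegative parts u and v are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least-absolute-deviations (L1) regression via linear programming.

    Variables: beta (free, k of them), u >= 0, v >= 0 with
    X beta + u - v = y; minimize sum(u + v) = sum |residuals|.
    """
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(n), np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]
```

Because the L1 objective penalizes large residuals linearly rather than quadratically, the fit is less influenced by a few extreme errors, which is consistent with the study's interest in very small samples.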

32 citations


Journal ArticleDOI
TL;DR: Examination of the residuals, as recommended here, is a far better procedure than subjective graphical comparison, as it provides an objective justification for the use of the chosen model; it is important for the researcher not only to choose his model objectively, but also to submit to his readers ample justification for that choice.
Abstract: A controversy has developed recently concerning the problem of fitting a regression line to data which are presumed to be related by the power function, Y = aXb. The model commonly used as the relationship between avian standard metabolic rate (Y) and body weight (X) is such a situation. Most authors routinely transform the data logarithmically and then perform a least squares linear regression fit to the equation log Y = log a + b log X; but the untransformed data may instead be fit directly by least squares (Zar, BioScience 18:1118-1120, 1968). As pointed out in the latter paper, as well as in Lasiewski and Dawson (Condor 71:335-336, 1969), these two methods are not statistically equivalent. Whereas the first method may be appropriate for certain sets of data, the second may be warranted for others, and the researcher is confronted with the need to choose objectively between them. The crux of the matter, as stressed by Lasiewski and Dawson and by Zar, is whether the use of a particular model results in residuals whose variability is constant over the range of measurements. A residual is the difference between the observed Y and the Y predicted by the model, and constant variability of residuals is termed homoscedasticity. Glejser (J. Amer. Statist. Assoc. 64:316-323, 1969) has proposed an objective, easy-to-perform test as a criterion for the determination of heteroscedasticity (i.e., the lack of constant variability among the residuals). He suggests an examination of the linear regression, E = a' + b'X, where E is the absolute value of the residuals and X is the independent variable in the model being examined. In this regression, using the residuals from the untransformed fit, a slope (b') significantly greater than zero would indicate that the variability of the residuals increases with X, and the logarithmic transformation may be employed to try to achieve homoscedasticity. 
(A slope not significantly different from zero should rule out the employment of any transformation.) If, after the fitting of the logarithmic equation, the application of Glejser's regression yields a b' value not significantly different from zero, then the transformation might be deemed justifiable. For the passerine data of Lasiewski and Dawson (Condor 69:13-23, 1967), such a regression of residuals applied after fitting the nontransformed power function yields a b' significantly greater than zero. The regression applied after fitting the logarithmic model (in this case E is the absolute difference between the log Y observed and the log Y predicted, and log X is used in place of X) results in a positive b' value not significantly different from zero. Thus, contrary to my previous suggestions (BioScience 18:1118-1120, 1968; Comp. Biochem. Physiol. 29: 227-234, 1969), the logarithmic model is the more appropriate for this set of data. However, the fact that these data are better analyzed using the logarithmic transformation should not lead to a general rejection of the nontransformed model. Lasiewski and Dawson (Condor 71:335-336, 1969) suggest that graphical examination might aid subjectively in the choice of model. However, examination of the residuals, such as recommended above, is a far better procedure, as it can provide an objective justification for the use of the chosen model. It is of importance for the researcher not only to choose his model objectively, but also to submit to his readers ample justification of his choice.
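Glejser's procedure as described, regressing the absolute residuals E on the independent variable and checking whether the slope b' differs significantly from zero, can be sketched as follows. The function name and the simple t statistic for b' are assumptions of this illustration, not part of the original articles.

```python
import numpy as np

def glejser_slope(x, y):
    """Glejser's heteroscedasticity check for simple linear regression.

    Fit y on x by least squares, then regress the absolute residuals
    E on x (E = a' + b'x). Returns (b', t statistic of b'); a slope
    significantly greater than zero suggests residual spread grows
    with x.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    E = np.abs(y - X @ beta)              # absolute residuals
    g, *_ = np.linalg.lstsq(X, E, rcond=None)
    resid = E - X @ g                     # residuals of the Glejser fit
    n = len(x)
    s2 = resid @ resid / (n - 2)
    var_b = s2 * np.linalg.inv(X.T @ X)[1, 1]
    return g[1], g[1] / np.sqrt(var_b)
```

For the logarithmic model one would pass log X and the log-scale residuals instead, exactly as the abstract describes for the passerine data.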

4 citations