
Showing papers by "T. W. Anderson" published in 1973



01 Jan 1973
TL;DR: In this article, an asymptotic expansion of the distribution function of the k-class estimate is given in terms of an Edgeworth or Gram-Charlier series (of which the leading term is the normal distribution).
Abstract: The limited information maximum likelihood and two-stage least squares estimates have the same asymptotic normal distribution; the ordinary least squares estimate has another asymptotic normal distribution. This paper considers more accurate approximations to the distributions of the so-called "k-class" estimates. An asymptotic expansion of the distribution of such an estimate is given in terms of an Edgeworth or Gram-Charlier series (of which the leading term is the normal distribution). The development also permits expression of the exact distribution in several forms. The distributions of the two-stage least squares and ordinary least squares estimates are transformed to doubly-noncentral F distributions. Numerical comparisons are made between the approximate distributions and exact distributions calculated by the second author.

Several methods have been proposed for estimating the coefficients of a single equation in a complete system of simultaneous structural equations, including limited information maximum likelihood (Anderson and Rubin [1]), two-stage least squares (Basmann [3] and Theil [9]), and ordinary least squares. Under appropriate general conditions the first two methods yield consistent estimates; the two sets of estimates, normalized by the square root of the sample size, have the same limiting joint normal distribution (Anderson and Rubin [2]).

In special cases the exact distributions of the estimates have been obtained. In particular, when the predetermined variables are exogenous, two endogenous variables occur in the relevant equation, and the coefficient of one endogenous variable is specified to be one, the exact distribution of the estimate of the coefficient of the other endogenous variable has been obtained by Richardson [7] and Sawa [8] in the case of two-stage least squares and by Mariano and Sawa [6] in the case of limited information maximum likelihood. The exact distributions involve multiple infinite series and are hard to interpret, but Sawa has graphed some of the densities of the two-stage least squares estimate on the basis of calculations from an infinite series expression.

The main result of this paper is an asymptotic expansion of the distribution function of the so-called k-class estimate (which includes the two-stage least squares estimate and the ordinary least squares estimate) in the case of two endogenous variables. The density of the approximate distribution is a normal density multiplied by a polynomial. The first correction term to the normal distribution involves a cubic divided by the square root of the sample size.
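To make the structure of such an expansion concrete, here is a minimal Python sketch of a generic first-order Edgeworth correction. It is the textbook expansion for a standardized sum of i.i.d. variables, not the paper's k-class expansion; the sample size n, the Exp(1) test case, and the grid of x values are illustrative assumptions. The leading term is the normal CDF and the correction is the normal density times a polynomial of order 1/sqrt(n); in the density form the correction polynomial is the cubic x^3 - 3x, matching the "cubic divided by the square root of the sample size" mentioned above.

```python
import numpy as np
from scipy.stats import norm, gamma

def edgeworth_cdf(x, n, kappa3):
    """First-order Edgeworth approximation to the CDF of a standardized
    sum of n i.i.d. variables with skewness kappa3:
        F_n(x) ~ Phi(x) - phi(x) * kappa3 * (x**2 - 1) / (6 * sqrt(n)).
    The corresponding density is the normal density times a polynomial
    (here 1 + kappa3 * (x**3 - 3x) / (6 * sqrt(n))); the correction
    term is O(1 / sqrt(n))."""
    return norm.cdf(x) - norm.pdf(x) * kappa3 * (x**2 - 1) / (6.0 * np.sqrt(n))

# Check against the exact CDF of a standardized sum of n Exp(1) draws:
# the sum is Gamma(n, 1) with mean n and s.d. sqrt(n); Exp(1) has skewness 2.
n = 10
x = np.linspace(-3.0, 3.0, 13)
exact = gamma.cdf(n + np.sqrt(n) * x, a=n)    # P((sum - n)/sqrt(n) <= x)
print(np.abs(exact - norm.cdf(x)).max())               # plain normal error
print(np.abs(exact - edgeworth_cdf(x, n, 2.0)).max())  # smaller with correction
```

Running the check shows the Edgeworth-corrected CDF tracking the exact distribution noticeably better than the plain normal leading term, which is exactly the kind of refinement the paper pursues for the k-class estimates.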

73 citations


Book Chapter
01 Jan 1973
TL;DR: In this chapter, an asymptotic evaluation of the probabilities of misclassification by linear discriminant functions is presented for the case where the parameters are unknown and a sample is available from each population; the common covariance matrix of the populations is estimated using deviations from the respective means in the two samples.
Abstract: This chapter presents an asymptotic evaluation of the probabilities of misclassification by linear discriminant functions. The problem of classifying an observation into one of two multivariate normal populations with a common covariance matrix might be called the classical classification problem. Fisher's linear discriminant function serves as a criterion when samples are used to estimate the parameters of the two distributions. When the parameters are unknown and a sample is available from each population, the mean of each population is estimated by the mean of the respective sample, and the common covariance matrix of the populations is estimated using deviations from the respective means in the two samples. The classification function W, proposed by T. W. Anderson, is obtained by replacing the parameters in the linear function resulting from the Neyman–Pearson Fundamental Lemma by these estimates; this substitution has been called plugging in estimates. The Studentized W statistic is W minus the estimate of its limiting mean, divided by the estimate of its limiting standard deviation. A statistician who wants to set the cut-off point to achieve a specified probability of misclassification can use this Studentized W.
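As a concrete illustration of the plug-in and Studentization steps, the following Python sketch builds W from synthetic normal samples. The data, sample sizes, and dimension are hypothetical assumptions, and the limiting mean D^2/2 and limiting standard deviation D (for an observation from the first population, with D^2 the Mahalanobis distance between the populations) are the standard normal-theory values, estimated here by plugging in the sample Mahalanobis distance.

```python
import numpy as np

# Hypothetical data: two trivariate normal samples with common covariance.
rng = np.random.default_rng(0)
p = 3
X1 = rng.multivariate_normal(np.zeros(p), np.eye(p), size=50)
X2 = rng.multivariate_normal(np.ones(p), np.eye(p), size=50)

# Plug-in estimates: sample means, and the pooled covariance computed
# from deviations about each sample's own mean.
xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
S = ((X1 - xbar1).T @ (X1 - xbar1)
     + (X2 - xbar2).T @ (X2 - xbar2)) / (len(X1) + len(X2) - 2)
Sinv = np.linalg.inv(S)
d = xbar1 - xbar2

def W(x):
    # Anderson's classification statistic: the parameters of the
    # Neyman-Pearson linear function replaced by sample estimates.
    return (x - 0.5 * (xbar1 + xbar2)) @ Sinv @ d

# Studentized W: subtract the estimated limiting mean (D^2 / 2 for an
# observation from population 1) and divide by the estimated limiting
# standard deviation (D), with D^2 estimated by the sample Mahalanobis
# distance between the two samples.
D2_hat = d @ Sinv @ d
def studentized_W(x):
    return (W(x) - 0.5 * D2_hat) / np.sqrt(D2_hat)

# On the Studentized scale, a cut-off can be set from a standard normal
# quantile to target a specified misclassification probability.
print(studentized_W(X1[0]), studentized_W(X2[0]))
```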

25 citations