Author

T. W. Anderson

Bio: T. W. Anderson is an academic researcher from Stanford University. The author has contributed to research in topics: Estimator & Autoregressive model. The author has an h-index of 52 and has co-authored 179 publications receiving 42,299 citations. Previous affiliations of T. W. Anderson include Columbia University & Carnegie Mellon University.


Papers
Journal ArticleDOI
TL;DR: The asymptotic distribution of the maximum likelihood estimator derived under normality is shown to remain valid generally if the different latent variables are independent (not just uncorrelated), and tests of the covariance structure are also asymptotically robust.

82 citations

Journal ArticleDOI
TL;DR: In this article, the least squares estimate of the parameter matrix in the model $\mathbf{y}_t = \mathbf{B'x}_t + \mathbf{u}_t$, with unobservable disturbances $\mathbf{u}_t$, is shown to converge to $\mathbf{B}$ with probability one under certain conditions on the behavior of $\mathbf{x}_t$ and $\mathbf{u}_t$.
Abstract: The least squares estimate of the parameter matrix $\mathbf{B}$ in the model $\mathbf{y}_t = \mathbf{B'x}_t + \mathbf{u}_t$, where $\mathbf{u}_t$ is an $m$-component vector of unobservable disturbances and $\mathbf{x}_t$ is a $p$-component vector, converges to $\mathbf{B}$ with probability one under certain conditions on the behavior of $\mathbf{x}_t$ and $\mathbf{u}_t$. When $\mathbf{x}_t$ is stochastic and the conditional expectation of $\mathbf{u}_t$ given $\mathbf{x}_s$ for $s \leqslant t$ and $\mathbf{u}_s$ for $s < t$ is zero, the least squares estimates are strongly consistent if the inverse of $\mathbf{A}_T = \sum_{t=1}^{T} \mathbf{x}_t\mathbf{x}'_t$, where $T$ is the sample size, converges to the zero matrix and if the ratio of the largest to the smallest characteristic root of $\mathbf{A}_T$ is bounded with probability one.
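To make the conditions concrete, here is a minimal numerical sketch (my own illustration, not code from the paper) that forms the least squares estimate of B and inspects the two conditions on A_T; the simulated design, dimensions, and variable names are assumptions made only for this example.

# Sketch: least squares estimate of B in y_t = B'x_t + u_t and the two
# conditions on A_T (inverse shrinking to zero, bounded eigenvalue ratio).
import numpy as np

rng = np.random.default_rng(0)
T, p, m = 5000, 3, 2
B = np.array([[1.0, -0.5],
              [0.2,  0.8],
              [-1.0, 0.3]])           # true p x m coefficient matrix (assumed)

X = rng.normal(size=(T, p))           # regressors x_t, t = 1..T
U = rng.normal(size=(T, m))           # unobservable disturbances u_t
Y = X @ B + U                         # y_t' = x_t' B + u_t'

A_T = X.T @ X                         # A_T = sum_t x_t x_t'
B_hat = np.linalg.solve(A_T, X.T @ Y) # least squares estimate of B

eigvals = np.linalg.eigvalsh(A_T)
print("||B_hat - B||:", np.linalg.norm(B_hat - B))
print("largest eigenvalue of A_T^{-1}:", 1.0 / eigvals.min())    # shrinks as T grows
print("condition ratio of A_T:", eigvals.max() / eigvals.min())  # stays bounded here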

78 citations

01 Jan 1973
TL;DR: In this article, an asymptotic expansion of the distribution function of the k-class estimate is given in terms of an Edgeworth or Gram-Charlier series (of which the leading term is the normal distribution).
Abstract: The limited information maximum likelihood and two-stage least squares estimates have the same asymptotic normal distribution; the ordinary least squares estimate has another asymptotic normal distribution. This paper considers more accurate approximations to the distributions of the so-called "k-class" estimates. An asymptotic expansion of the distribution of such an estimate is given in terms of an Edgeworth or Gram-Charlier series (of which the leading term is the normal distribution). The development also permits expression of the exact distribution in several forms. The distributions of the two-stage least squares and ordinary least squares estimates are transformed to doubly-noncentral F distributions. Numerical comparisons are made between the approximate distributions and exact distributions calculated by the second author. Several methods have been proposed for estimating the coefficients of a single equation in a complete system of simultaneous structural equations, including limited information maximum likelihood (Anderson and Rubin [1]), two-stage least squares (Basmann [3] and Theil [9]), and ordinary least squares. Under appropriate general conditions the first two methods yield consistent estimates; the two sets of estimates normalized by the square root of the sample size have the same limiting joint normal distributions (Anderson and Rubin [2]). In special cases the exact distributions of the estimates have been obtained. In particular, when the predetermined variables are exogenous, two endogenous variables occur in the relevant equation, and the coefficient of one endogenous variable is specified to be one, the exact distribution of the estimate of the coefficient of one endogenous variable has been obtained by Richardson [7] and Sawa [8] in the case of two-stage least squares and by Mariano and Sawa [6] in the case of limited information maximum likelihood. The exact distributions involve multiple infinite series and are hard to interpret, but Sawa has graphed some of the densities of the two-stage least squares estimate on the basis of calculations from an infinite series expression. The main result of this paper is to obtain an asymptotic expansion of the distribution function of the so-called k-class estimate (which includes the two-stage least squares estimate and the ordinary least squares estimate) in the case of two endogenous variables. The density of the approximate distribution is a normal density multiplied by a polynomial. The first correction term to the normal distribution involves a cubic divided by the square root of the sample size.
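As a concrete illustration of the estimators whose distributions are being approximated, the following is a hedged sketch (mine, not code from the paper) of the k-class estimator for a single structural equation with one normalized endogenous variable and one endogenous regressor; k = 0 gives ordinary least squares and k = 1 gives two-stage least squares. The data-generating process and all names are assumptions for the example.

# Sketch: k-class estimator for y = Y*beta + X1*gamma + u, where X = [X1, X2]
# collects the predetermined (exogenous) variables.  k = 0 -> OLS, k = 1 -> 2SLS.
import numpy as np

def k_class(y, Y, X1, X, k):
    """k-class estimate of (beta, gamma)."""
    n = len(y)
    M_X = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # residual-maker of X
    lhs = np.block([[Y.T @ Y - k * (Y.T @ M_X @ Y), Y.T @ X1],
                    [X1.T @ Y,                      X1.T @ X1]])
    rhs = np.concatenate([Y.T @ y - k * (Y.T @ M_X @ y), X1.T @ y])
    return np.linalg.solve(lhs, rhs)

# tiny usage example with simulated data (assumed design)
rng = np.random.default_rng(1)
n = 500
X1 = np.ones((n, 1))                       # included exogenous variable (intercept)
X2 = rng.normal(size=(n, 2))               # excluded instruments
X = np.hstack([X1, X2])
v = rng.normal(size=n)
Y = (X2 @ np.array([1.0, -1.0]) + v).reshape(-1, 1)    # one endogenous regressor
y = 0.5 * Y[:, 0] + 1.0 + 0.8 * v + rng.normal(size=n) # normalized endogenous variable
print("OLS  (k=0):", k_class(y, Y, X1, X, 0.0))
print("2SLS (k=1):", k_class(y, Y, X1, X, 1.0))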

73 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice was examined, and the results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and Gamma Hat …
Abstract: This article examines the adequacy of the “rules of thumb” conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. Using a 2‐index presentation strategy, which includes using the maximum likelihood (ML)‐based standardized root mean squared residual (SRMR) and supplementing it with either Tucker‐Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), Relative Noncentrality Index (RNI), Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or root mean squared error of approximation (RMSEA), various combinations of cutoff values from selected ranges of cutoff criteria for the ML‐based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true‐population and misspecified models; that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and Gamma Hat …
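The 2-index presentation strategy can be stated as a simple decision rule. Below is a minimal sketch (my own, not code from the article) that pairs the ML-based SRMR with one supplemental index and rejects a model failing either cutoff; the specific cutoff values are the commonly cited ones and should be read as assumptions for the sketch rather than the article's full set of recommendations.

# Sketch: 2-index decision rule (SRMR plus one supplemental fit index).
def acceptable_fit(srmr, supplemental, index="CFI",
                   srmr_cutoff=0.08, supplemental_cutoffs=None):
    """Return True if the model passes both the SRMR and the supplemental cutoff."""
    if supplemental_cutoffs is None:
        # assumed cutoff values; higher-is-better indexes vs. RMSEA (lower is better)
        supplemental_cutoffs = {"TLI": 0.95, "BL89": 0.95, "CFI": 0.95,
                                "RNI": 0.95, "RMSEA": 0.06}
    cutoff = supplemental_cutoffs[index]
    if index == "RMSEA":                      # smaller is better
        supplemental_ok = supplemental <= cutoff
    else:                                     # larger is better
        supplemental_ok = supplemental >= cutoff
    return srmr <= srmr_cutoff and supplemental_ok

print(acceptable_fit(srmr=0.05, supplemental=0.97, index="CFI"))   # True
print(acceptable_fit(srmr=0.10, supplemental=0.97, index="CFI"))   # False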

76,383 citations

Journal ArticleDOI
TL;DR: In this article, a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure.
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), which is designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = (-2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.
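The MAICE rule is easy to state in code. The sketch below (not from the paper) computes AIC for a set of candidate models and selects the minimizer; the candidate names, log-likelihoods, and parameter counts are made up purely for illustration.

# Sketch: AIC = -2*log(maximum likelihood) + 2*(number of adjusted parameters),
# and the MAICE rule: pick the candidate model with the minimum AIC.
def aic(log_likelihood, n_params):
    return -2.0 * log_likelihood + 2.0 * n_params

# hypothetical candidates: (name, maximized log-likelihood, parameter count)
candidates = [("AR(1)", -512.3, 2),
              ("AR(2)", -508.9, 3),
              ("AR(3)", -508.4, 4)]
scores = {name: aic(ll, k) for name, ll, k in candidates}
maice = min(scores, key=scores.get)   # minimum AIC estimate of the model
print(scores, "->", maice)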

47,133 citations

Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and data panel methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal ArticleDOI
TL;DR: In this article, the generalized method of moments (GMM) estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables.
Abstract: This paper presents specification tests that are applicable after estimating a dynamic model from panel data by the generalized method of moments (GMM), and studies the practical performance of these procedures using both generated and real data. Our GMM estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables. We propose a test of serial correlation based on the GMM residuals and compare this with Sargan tests of over-identifying restrictions and Hausman specification tests.
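To illustrate the linear moment restrictions, here is a hedged sketch (my own, not the authors' code) of the instrument matrix implied by no serial correlation for a simple dynamic model y_it = rho*y_{i,t-1} + eta_i + v_it with individual effects; the model form and the toy series are assumptions for the example.

# Sketch: after first-differencing, lagged levels y_i1,...,y_i(t-2) are valid
# instruments for the differenced equation at time t when v_it is serially
# uncorrelated, i.e. E[y_{i,t-s} * (v_it - v_{i,t-1})] = 0 for s >= 2.
import numpy as np

def instrument_matrix(y_i):
    """Block-diagonal instrument matrix Z_i for one individual's series y_i1..y_iT."""
    T = len(y_i)
    # differenced equation at time t (t = 3..T) uses levels y_i1..y_i(t-2)
    blocks = [y_i[:t - 2] for t in range(3, T + 1)]
    Z = np.zeros((T - 2, sum(len(b) for b in blocks)))
    col = 0
    for row, b in enumerate(blocks):
        Z[row, col:col + len(b)] = b
        col += len(b)
    return Z

y_i = np.array([0.4, 0.7, 0.5, 0.9, 1.1])   # toy series, T = 5
print(instrument_matrix(y_i))
# rows correspond to the differenced equations at t = 3, 4, 5;
# the row for time t uses y_i1 ... y_i(t-2) as instruments.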

26,580 citations

Book
B. J. Winer
01 Jan 1962
TL;DR: In this book, the author introduces the principles of estimation and inference (means and variances), the design and analysis of single-factor and factorial experiments, and the analysis of experiments having repeated measures on the same element.
Abstract: CHAPTER 1: Introduction to Design CHAPTER 2: Principles of Estimation and Inference: Means and Variance CHAPTER 3: Design and Analysis of Single-Factor Experiments: Completely Randomized Design CHAPTER 4: Single-Factor Experiments Having Repeated Measures on the Same Element CHAPTER 5: Design and Analysis of Factorial Experiments: Completely-Randomized Design CHAPTER 6: Factorial Experiments: Computational Procedures and Numerical Example CHAPTER 7: Multifactor Experiments Having Repeated Measures on the Same Element CHAPTER 8: Factorial Experiments in which Some of the Interactions are Confounded CHAPTER 9: Latin Squares and Related Designs CHAPTER 10: Analysis of Covariance

25,607 citations