Author

T. W. Anderson

Bio: T. W. Anderson is an academic researcher from Stanford University. The author has contributed to research in topics: Estimator & Autoregressive model. The author has an h-index of 52 and has co-authored 179 publications receiving 42,299 citations. Previous affiliations of T. W. Anderson include Columbia University & Carnegie Mellon University.


Papers
Book
01 Jan 1986
TL;DR: The new statistical analysis of data is presented in a systematic fashion.
Abstract: The new statistical analysis of data.

136 citations

Journal Article
TL;DR: In this paper, the distributions of the Limited Information Maximum Likelihood estimator for the coefficient of one endogenous variable are evaluated numerically and compared with those of the Two-Stage Least Squares estimator.
Abstract: The distributions of the Limited Information Maximum Likelihood estimator for the coefficient of one endogenous variable are evaluated numerically. Tables are given for enough values of the parameters to cover all cases of interest. Comparisons are made with the Two-Stage Least Squares estimator.

127 citations
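As a concrete point of reference for the two estimators being compared, here is a minimal NumPy sketch of the k-class family that contains both, in the simplest setting (one endogenous regressor, no included exogenous variables). The function names and this simplified setup are illustrative assumptions, not the paper's own computations.

```python
import numpy as np

def liml_and_tsls(y, x, Z):
    """k-class estimates of beta in y = x*beta + u, where the scalar
    regressor x is endogenous and Z (n x m) holds the instruments.
    Simplified case: no included exogenous regressors."""
    n = len(y)
    M = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)   # annihilator of span(Z)
    W = np.column_stack([y, x])
    # LIML's k is the smallest eigenvalue of (W'MW)^{-1}(W'W)
    kappa = np.real(np.linalg.eigvals(
        np.linalg.solve(W.T @ M @ W, W.T @ W))).min()

    def k_class(k):
        # beta_k = [x'(I - kM)x]^{-1} x'(I - kM)y
        return (x @ y - k * (x @ M @ y)) / (x @ x - k * (x @ M @ x))

    return k_class(kappa), k_class(1.0)   # LIML, and 2SLS (k = 1)
```

Setting k = 1 recovers Two-Stage Least Squares, so the two estimators the paper compares differ only in this scalar.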

Journal Article
TL;DR: In this article, the Neyman-Pearson theory is applied to the problem of testing for serial correlation; certain theorems concerning more general problems of quadratic forms are developed and then applied to the question of testing serial correlation.
Abstract: Several different statistics have been proposed for testing the independence between successive observations from a normal population. In order to choose between the various tests a theory of testing this hypothesis in certain populations is needed. In this paper the problem is studied within the framework of the Neyman-Pearson theory. Certain theorems concerning more general problems of quadratic forms are developed and later applied to the question of testing serial correlation.

124 citations
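To make the quadratic-form structure concrete, the following is a small sketch of one classical statistic of this type, the mean-adjusted circular serial correlation coefficient. It is offered as an illustration of a ratio of quadratic forms x'Ax / x'x, not as the paper's specific test.

```python
import numpy as np

def circular_serial_correlation(x):
    """First-order circular serial correlation coefficient: a ratio of
    quadratic forms x'Ax / x'x, where A picks out adjacent products
    (with wrap-around from the last observation to the first)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the sample mean
    return (x * np.roll(x, -1)).sum() / (x * x).sum()
```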

Journal Article
TL;DR: In this article, the problem of determining the appropriate degree of a polynomial in the index, say time, to represent the regression of the observable variable is formulated in terms used in the theory of testing hypotheses; the optimal procedure is to test in sequence whether coefficients are 0, starting with the highest (specified) degree.
Abstract: On the basis of a sample of observations, an investigator wants to determine the appropriate degree of a polynomial in the index, say time, to represent the regression of the observable variable. This multiple decision problem is formulated in terms used in the theory of testing hypotheses. Given the degree of polynomial regression, the probability of deciding a higher degree is specified and does not depend on what the actual polynomial is (except its degree). Within the class of procedures satisfying these conditions and symmetry (or two-sidedness) conditions, the probabilities of correct decisions are maximized. The optimal procedure is to test in sequence whether coefficients are 0, starting with the highest (specified) degree. The procedure holds for other linear regression functions when the independent variates are ordered. The problem and its solution can be generalized to the multivariate case and to other cases with a certain structure of sufficient statistics.

122 citations
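A minimal sketch of the step-down idea, assuming ordinary least squares fits and standard t-tests at each stage; the function name, the fixed significance level, and the use of a raw (rather than orthogonal) polynomial basis are illustrative choices, not details from the paper.

```python
import numpy as np
from scipy import stats

def choose_degree(x, y, max_degree, alpha=0.05):
    """Step-down degree selection: fit the highest specified degree,
    test whether its leading coefficient is 0, and stop at the first
    rejection; otherwise drop to the next lower degree."""
    for k in range(max_degree, 0, -1):
        X = np.vander(x, k + 1)                    # columns: x^k, ..., x, 1
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = len(y) - (k + 1)
        s2 = resid @ resid / dof                   # error-variance estimate
        cov = s2 * np.linalg.inv(X.T @ X)          # covariance of beta-hat
        t = beta[0] / np.sqrt(cov[0, 0])           # t-stat, leading coefficient
        if 2 * stats.t.sf(abs(t), dof) < alpha:    # two-sided test
            return k
    return 0                                       # nothing significant: constant fit
```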

Journal Article
TL;DR: In this paper, a method using Householder transformations is shown to be more efficient than factoring a matrix of normal variables by the singular value decomposition, and a further alternative based on the product of n(n-1)/2 orthogonal matrices, each representing an angle of rotation, is proposed.
Abstract: In order to generate a random orthogonal matrix distributed according to Haar measure over the orthogonal group it is natural to start with a matrix of normal random variables and then factor it by the singular value decomposition. A more efficient method is obtained by using Householder transformations. We propose another alternative based on the product of $n(n-1)/2$ orthogonal matrices, each of which represents an angle of rotation. Some numerical comparisons of alternative methods are made.

116 citations
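For reference, a minimal NumPy sketch of the factorization approach the abstract starts from, using Householder-based QR rather than the SVD; the sign-correction step and the function name are standard practice rather than details taken from the paper.

```python
import numpy as np

def random_orthogonal(n, seed=None):
    """Draw an n x n orthogonal matrix distributed by Haar measure."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))     # i.i.d. N(0, 1) entries
    Q, R = np.linalg.qr(A)              # Householder-based QR factorization
    Q *= np.sign(np.diag(R))            # fix column signs for exact Haar measure
    return Q
```

Without the sign correction, the convention that QR routines use for the diagonal of R biases the distribution away from Haar measure.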


Cited by
Journal Article
TL;DR: In this article, the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice was examined, and the results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and G...
Abstract: This article examines the adequacy of the "rules of thumb" conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. A 2-index presentation strategy is used: the maximum likelihood (ML)-based standardized root mean squared residual (SRMR) is supplemented with either the Tucker-Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), the Relative Noncentrality Index (RNI), the Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or the root mean squared error of approximation (RMSEA). Various combinations of cutoff values from selected ranges of cutoff criteria for the ML-based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true-population and misspecified models, that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and G...

76,383 citations
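As a reminder of what the cutoffs are applied to, here is a small sketch assuming the usual chi-square-based definitions of RMSEA and CFI; the function names are illustrative, and the thresholds in the comments reflect the neighborhood of values discussed in the article.

```python
import numpy as np

def rmsea(chi2, df, n):
    """Root mean squared error of approximation from the model chi-square;
    values near .06 or below are commonly read as good fit."""
    return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative Fit Index against the baseline (independence) model;
    values near .95 or above are commonly read as good fit."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0
```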

Journal Article
TL;DR: In this article, a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure.
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = -2 log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.

47,133 citations
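A minimal sketch of the MAICE rule as defined above: compute AIC for each candidate model from its maximized log-likelihood and parameter count, then pick the minimizer. The data structure (a list of (name, log-likelihood, n_params) tuples) is an illustrative assumption.

```python
def aic(log_likelihood, k):
    """AIC = -2 log(maximum likelihood) + 2 (number of adjusted parameters)."""
    return -2.0 * log_likelihood + 2.0 * k

def maice(candidates):
    """MAICE: among candidates given as (name, max log-likelihood, n_params)
    tuples, return the model minimizing AIC."""
    return min(candidates, key=lambda m: aic(m[1], m[2]))

# Example: an AR(2) with a slightly higher likelihood than an AR(1)
# can still lose once its extra parameter is penalized.
models = [("AR(1)", -512.3, 2), ("AR(2)", -511.9, 3)]
print(maice(models)[0])   # -> AR(1)
```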

Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal ArticleDOI
TL;DR: In this article, the generalized method of moments (GMM) estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables.
Abstract: This paper presents specification tests that are applicable after estimating a dynamic model from panel data by the generalized method of moments (GMM), and studies the practical performance of these procedures using both generated and real data. Our GMM estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables. We propose a test of serial correlation based on the GMM residuals and compare this with Sargan tests of over-identifying restrictions and Hausman specification tests.

26,580 citations
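The full estimator stacks many instruments per period; as a much-reduced illustration of the underlying moment restriction, here is a sketch of the simplest case (an AR(1) panel, one instrument, exactly identified). This is an Anderson-Hsiao-style IV estimator, not the paper's full GMM procedure, and the function name and balanced-panel setup are illustrative assumptions.

```python
import numpy as np

def first_difference_iv_rho(y):
    """IV estimate of rho in y_it = rho * y_i,t-1 + mu_i + e_it.

    y: balanced panel of shape (N, T). First-differencing removes the
    individual effect mu_i; the level y_i,t-2 then instruments the
    differenced lag, via E[y_i,t-2 * (dy_it - rho * dy_i,t-1)] = 0.
    """
    dy = np.diff(y, axis=1)        # dy[:, k] is the difference at time k + 1
    g = dy[:, 1:].ravel()          # dy_it      for t = 2, ..., T - 1
    x = dy[:, :-1].ravel()         # dy_i,t-1
    z = y[:, :-2].ravel()          # y_i,t-2: valid if e_it is serially uncorrelated
    return (z @ g) / (z @ x)       # exactly identified IV estimate of rho
```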

Book
B. J. Winer
01 Jan 1962
TL;DR: In this book, the authors introduce the principles of estimation and inference for means and variances, and the design and analysis of single-factor and factorial experiments, including those having repeated measures on the same element.
Abstract: CHAPTER 1: Introduction to Design CHAPTER 2: Principles of Estimation and Inference: Means and Variance CHAPTER 3: Design and Analysis of Single-Factor Experiments: Completely Randomized Design CHAPTER 4: Single-Factor Experiments Having Repeated Measures on the Same Element CHAPTER 5: Design and Analysis of Factorial Experiments: Completely-Randomized Design CHAPTER 6: Factorial Experiments: Computational Procedures and Numerical Example CHAPTER 7: Multifactor Experiments Having Repeated Measures on the Same Element CHAPTER 8: Factorial Experiments in which Some of the Interactions are Confounded CHAPTER 9: Latin Squares and Related Designs CHAPTER 10: Analysis of Covariance

25,607 citations