Author

T. W. Anderson

Bio: T. W. Anderson is an academic researcher from Stanford University. The author has contributed to research in topics including Estimator and Autoregressive model. The author has an h-index of 52 and has co-authored 179 publications receiving 42,299 citations. Previous affiliations of T. W. Anderson include Columbia University and Carnegie Mellon University.


Papers
Posted Content
TL;DR: In this article, the authors develop the likelihood ratio criterion (LRC) for testing the coefficients of a structural equation in a system of simultaneous equations in econometrics; the method can be extended to the linear functional relationships (or errors-in-variables) model, reduced rank regression, and cointegration models.
Abstract: We develop the likelihood ratio criterion (LRC) for testing the coefficients of a structural equation in a system of simultaneous equations in econometrics. We relate the likelihood ratio criterion to the AR statistic proposed by Anderson and Rubin (1949, 1950), which has been widely known and used in econometrics over the past several decades. The method originally developed by Anderson and Rubin (1949, 1950) can be modified to the situation when there are many (or weak in some sense) instruments which may have some relevance in recent econometrics. The method of LRC can be extended to the linear functional relationships (or the errors-in-variables) model, the reduced rank regression and the cointegration models.
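
The AR statistic referenced here has a standard closed form in the case with no included exogenous regressors: under H0: beta = beta0, it compares the part of the null-restricted residual explained by the instruments with the part orthogonal to them. Below is a minimal sketch of that textbook form, assuming homoskedastic errors; the function name and variables are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def anderson_rubin_stat(y, Y, Z, beta0):
    """AR statistic for H0: beta = beta0 in y = Y @ beta + u, with an
    n x k instrument matrix Z and no included exogenous regressors.
    Under H0 with homoskedastic errors, AR is approximately F(k, n - k),
    regardless of how weak the instruments are."""
    n, k = Z.shape
    u0 = y - Y @ beta0                       # residuals under the null
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto span(Z)
    Pu = PZ @ u0                             # part explained by Z
    Mu = u0 - Pu                             # part orthogonal to Z
    ar = (Pu @ Pu / k) / (Mu @ Mu / (n - k))
    return ar, 1 - stats.f.cdf(ar, k, n - k)

# Illustrative use: the true coefficient is 2.0, so the test should
# not reject at conventional levels (p-value roughly uniform).
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))
x = Z @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)
print(anderson_rubin_stat(y, x.reshape(-1, 1), Z, np.array([2.0])))
```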

6 citations

Book ChapterDOI
01 Jan 1996
TL;DR: This chapter considers several indices of location and shows how each of them tells us about a central point in the data.
Abstract: After a set of data has been collected, it must be organized and condensed or categorized for purposes of analysis. In addition to graphical summaries, numerical indices can be computed that summarize the primary features of the data set. One is an indicator of location or central tendency that specifies where the set of measurements is “located” on the number line; it is a single number that designates the center of a set of measurements. In this chapter we consider several indices of location and show how each of them tells us about a central point in the data.
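
As a concrete illustration of the indices of location the chapter describes, here is a minimal sketch using Python's standard library; the data values are invented for illustration.

```python
import statistics

data = [2, 3, 3, 5, 7, 8, 8, 8, 10]

mean = statistics.mean(data)      # arithmetic mean: sum / count
median = statistics.median(data)  # middle value of the sorted data
mode = statistics.mode(data)      # most frequent value

print(f"mean={mean:.2f}, median={median}, mode={mode}")
# mean=6.00, median=7, mode=8
```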

6 citations

ReportDOI
01 Jun 1989
TL;DR: In this paper, Lindeberg-type conditions are used to establish asymptotic normality of sample regression and autoregression coefficients; this supports the asymptotic robustness of such procedures, whose large-sample properties hold under conditions more general than those under which they are derived.
Abstract: A statistical procedure is asymptotically robust if its large-sample properties hold under conditions more general than the conditions under which the procedure is derived. The justification of such procedures is often based directly or indirectly on a central limit theorem. In this paper Lindeberg-type conditions are utilized to establish asymptotic normality of sample regression and autoregression coefficients.
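
A small Monte Carlo sketch can illustrate the kind of result described: the least-squares slope is approximately normal even with markedly non-Gaussian errors, because the errors have finite variance and a Lindeberg-type central limit theorem applies to the estimator. The setup below is illustrative, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 200, 2000, 1.5
slopes = np.empty(reps)

for r in range(reps):
    x = rng.normal(size=n)
    # Heavily skewed (centered exponential) errors: non-Gaussian, but
    # finite variance, so a Lindeberg-type CLT still applies.
    e = rng.exponential(1.0, size=n) - 1.0
    y = beta * x + e
    slopes[r] = (x @ y) / (x @ x)  # OLS slope, no intercept

z = (slopes - beta) * np.sqrt(n)   # centered and scaled estimates
print(np.mean(np.abs(z) < 1.96))   # close to 0.95 if approx. normal
```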

5 citations

Journal ArticleDOI
TL;DR: The cointegrated model considered here is a nonstationary vector autoregressive process in which some linear functions are stationary and others are random walks, and the asymptotic distributions of the canonical correlations and the canonical vectors under the assumption that the process is Gaussian are found.
Abstract: The cointegrated model considered here is a nonstationary vector autoregressive process in which some linear functions are stationary and others are random walks. The first difference of the process (the "error-correction form") is stationary. Statistical inference, such as reduced rank regression estimation of the coefficients of the process and tests of hypotheses of dimensionality of the stationary part, involves the canonical correlations between the difference vector and the relevant vector of the past of the process. The asymptotic distributions of the canonical correlations and the canonical vectors under the assumption that the process is Gaussian are found.
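
A minimal simulation sketch of the quantities involved, assuming the standard product-moment construction of the squared canonical correlations (without the deterministic terms and short-run corrections a full analysis would include); all names and the data-generating process are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500

# Bivariate cointegrated system: both series share one random-walk
# trend w, so each is nonstationary but x1 - x2 is stationary
# (one cointegrating relation, one remaining unit root).
w = np.cumsum(rng.normal(size=T))
x1 = w + rng.normal(size=T)
x2 = w + rng.normal(size=T)
X = np.column_stack([x1, x2])

dX = np.diff(X, axis=0)   # first differences: stationary
L = X[:-1]                # lagged levels of the process

# Squared canonical correlations between dX and the lagged levels are
# the eigenvalues of S00^{-1} S01 S11^{-1} S10.
S00 = dX.T @ dX / len(dX)
S11 = L.T @ L / len(L)
S01 = dX.T @ L / len(dX)
lam = np.linalg.eigvals(np.linalg.solve(S00, S01) @
                        np.linalg.solve(S11, S01.T))
print(np.sqrt(np.clip(np.sort(lam.real)[::-1], 0, 1)))
# Expect one clearly nonzero canonical correlation (the stationary
# direction) and one near zero (the random-walk direction).
```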

5 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the adequacy of the conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice were examined. The results suggest that, for the ML method, a cutoff value close to .95 is needed for TLI, BL89, CFI, RNI, and Gamma Hat.
Abstract: This article examines the adequacy of the “rules of thumb” conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. Using a 2-index presentation strategy, which includes using the maximum likelihood (ML)-based standardized root mean squared residual (SRMR) and supplementing it with either Tucker-Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), Relative Noncentrality Index (RNI), Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or root mean squared error of approximation (RMSEA), various combinations of cutoff values from selected ranges of cutoff criteria for the ML-based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true-population and misspecified models; that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and Gamma Hat; a cutoff value close to .90 for Mc; a cutoff value close to .08 for SRMR; and a cutoff value close to .06 for RMSEA are needed before one can conclude that there is a relatively good fit between the hypothesized model and the observed data.
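
Two of the indexes discussed can be computed directly from the model and baseline chi-square statistics. The sketch below uses their commonly cited formulas (an assumption; the article evaluates cutoffs rather than prescribing implementations), with invented illustrative values.

```python
import math

def rmsea(chi2, df, n):
    """Root mean squared error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index: 1 minus the ratio of the model's
    noncentrality, max(chi2 - df, 0), to the baseline model's."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, num, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Illustrative values only: a model chi-square of 85 on 60 df with
# n = 400, against a baseline chi-square of 900 on 78 df.
print(round(rmsea(85, 60, 400), 3))    # ~0.032, below the .06 range
print(round(cfi(85, 60, 900, 78), 3))  # ~0.970, above the .95 cutoff
```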

76,383 citations

Journal ArticleDOI
TL;DR: In this article, a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure.
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as a procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed, and a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = (-2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.
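
The AIC formula quoted in the abstract can be applied directly to order selection. Below is a minimal sketch choosing an autoregressive order by minimum AIC, using the Gaussian least-squares profile likelihood; the simulation and all names are illustrative.

```python
import numpy as np

def ar_aic(x, p):
    """AIC for a Gaussian AR(p) fitted by least squares:
    -2 log L_max + 2 * (#parameters), up to an additive constant."""
    n = len(x) - p
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ coef) ** 2)
    # For Gaussian errors, -2 log L_max = n * log(sigma2) + constant.
    return n * np.log(sigma2) + 2 * (p + 1)  # p coefficients + variance

rng = np.random.default_rng(2)
x = np.zeros(600)
for t in range(2, 600):                      # simulate an AR(2)
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

aics = {p: ar_aic(x, p) for p in range(1, 7)}
print(min(aics, key=aics.get))               # MAICE order, typically 2
```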

47,133 citations

Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly methods of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal ArticleDOI
TL;DR: In this article, the generalized method of moments (GMM) estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables.
Abstract: This paper presents specification tests that are applicable after estimating a dynamic model from panel data by the generalized method of moments (GMM), and studies the practical performance of these procedures using both generated and real data. Our GMM estimator optimally exploits all the linear moment restrictions that follow from the assumption of no serial correlation in the errors, in an equation which contains individual effects, lagged dependent variables and no strictly exogenous variables. We propose a test of serial correlation based on the GMM residuals and compare this with Sargan tests of over-identifying restrictions and Hausman specification tests.
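
The moment restrictions the estimator exploits can be illustrated in stripped-down form: in the panel AR(1) model y_it = rho * y_{i,t-1} + eta_i + v_it, first-differencing removes the individual effect eta_i, and with serially uncorrelated v_it the lagged level y_{i,t-2} is a valid instrument for the differenced lag. The sketch below implements this single-instrument (Anderson-Hsiao-type) special case rather than the paper's full GMM estimator, which stacks all available lags as instruments; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, rho = 500, 8, 0.5

# Simulate y_it = rho * y_{i,t-1} + eta_i + v_it with fixed effects.
eta = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = eta + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)

# First differences remove eta; with no serial correlation in v,
# E[y_{i,t-2} * (v_it - v_{i,t-1})] = 0, so the level y_{i,t-2}
# instruments the differenced lag dy_{i,t-1}.
dy = np.diff(y, axis=1)                # dy[:, k] = y[:, k+1] - y[:, k]
num = den = 0.0
for t in range(2, T):                  # usable periods
    z = y[:, t - 2]                    # instrument: twice-lagged level
    num += z @ dy[:, t - 1]            # z' * dy_it
    den += z @ dy[:, t - 2]            # z' * dy_{i,t-1}
rho_iv = num / den
print(round(rho_iv, 3))                # close to 0.5, up to noise
```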

26,580 citations

Book
B. J. Winer1
01 Jan 1962
TL;DR: This book introduces the principles of estimation and inference (means and variances), the design and analysis of single-factor and factorial experiments, and the analysis of experiments having repeated measures on the same element.
Abstract:
CHAPTER 1: Introduction to Design
CHAPTER 2: Principles of Estimation and Inference: Means and Variance
CHAPTER 3: Design and Analysis of Single-Factor Experiments: Completely Randomized Design
CHAPTER 4: Single-Factor Experiments Having Repeated Measures on the Same Element
CHAPTER 5: Design and Analysis of Factorial Experiments: Completely Randomized Design
CHAPTER 6: Factorial Experiments: Computational Procedures and Numerical Example
CHAPTER 7: Multifactor Experiments Having Repeated Measures on the Same Element
CHAPTER 8: Factorial Experiments in which Some of the Interactions are Confounded
CHAPTER 9: Latin Squares and Related Designs
CHAPTER 10: Analysis of Covariance
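
Chapter 3's completely randomized single-factor design corresponds to the familiar one-way ANOVA F test. A minimal sketch with invented data:

```python
from scipy import stats

# Three treatment groups in a completely randomized design
# (illustrative data): one observation vector per treatment level.
g1 = [23, 25, 21, 24, 26]
g2 = [30, 28, 31, 29, 27]
g3 = [22, 20, 24, 23, 21]

# One-way ANOVA: F = MS_between / MS_within on (k-1, N-k) df.
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F(2, 12) = {f_stat:.2f}, p = {p_value:.4f}")
```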

25,607 citations