
The infusion of matrices into statistics

01 Mar 1999
About: The article was published on 1999-03-01 and is currently open access. It has received 10 citations to date.


Citations
Journal ArticleDOI
TL;DR: In this paper, the authors show that if $a$ is regular, then the block matrix $\begin{bmatrix} a & u \\ v & b \end{bmatrix}$ can be reduced to block-diagonal form with Schur complement $b - va^{-1}u$, as in (1.1).
Abstract: $$\begin{bmatrix} 1 & 0 \\ -va^{-1} & 1 \end{bmatrix}\begin{bmatrix} a & u \\ v & b \end{bmatrix}\begin{bmatrix} 1 & -a^{-1}u \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a & 0 \\ 0 & b - va^{-1}u \end{bmatrix} \tag{1.1}$$ provided $a$ is regular.

386 citations
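The block-elimination identity (1.1) cited above can be checked numerically. The sketch below is illustrative, not taken from the paper: it builds random blocks (the shift by $2I$ is only a convenient way to make $a$ invertible, i.e. "regular") and verifies that pre- and post-multiplying by the elimination matrices yields the block-diagonal matrix with the Schur complement $b - va^{-1}u$.

```python
import numpy as np

# Numerical check of the block-diagonalization identity (1.1).
rng = np.random.default_rng(0)
a = rng.standard_normal((2, 2)) + 2 * np.eye(2)  # invertible ("regular") block
u = rng.standard_normal((2, 2))
v = rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2))

a_inv = np.linalg.inv(a)
I, Z = np.eye(2), np.zeros((2, 2))

left  = np.block([[I, Z], [-v @ a_inv, I]])   # lower elimination matrix
mid   = np.block([[a, u], [v, b]])            # the partitioned matrix
right = np.block([[I, -a_inv @ u], [Z, I]])   # upper elimination matrix

lhs = left @ mid @ right
rhs = np.block([[a, Z], [Z, b - v @ a_inv @ u]])  # Schur complement in the corner
assert np.allclose(lhs, rhs)
```

The same identity underlies many of the statistical applications surveyed in the article, e.g. conditional covariance matrices of multivariate normal distributions.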

Journal ArticleDOI
TL;DR: This book covers maximization and minimization, the theory of symmetric matrices and their canonical forms, and applications in dynamic programming and mathematical economics.
Abstract: Foreword Preface to the Second Edition Preface 1. Maximization, Minimization, and Motivation 2. Vectors and Matrices 3. Diagonalization and Canonical Forms for Symmetric Matrices 4. Reduction of General Symmetric Matrices to Diagonal Form 5. Constrained Maxima 6. Functions of Matrices 7. Variational Description of Characteristic Roots 8. Inequalities 9. Dynamic Programming 10. Matrices and Differential Equations 11. Explicit Solutions and Canonical Forms 12. Symmetric Function, Kronecker Products and Circulants 13. Stability Theory 14. Markoff Matrices and Probability Theory 15. Stochastic Matrices 16. Positive Matrices, Perron's Theorem, and Mathematical Economics 17. Control Processes 18. Invariant Imbedding 19. Numerical Inversion of the Laplace Transform and Tychonov Regularization Appendix A. Linear Equations and Rank Appendix B. The Quadratic Form of Selberg Appendix C. A Method of Hermite Appendix D. Moments and Quadratic Forms Index.

65 citations

Journal ArticleDOI
TL;DR: In this paper, a new and useful geometric point of view for the understanding and analysis of certain matrix methods as they are used in statistics and econometrics is presented, and applications to statistical efficiency, parameter estimation, and correlation theory are given.

23 citations


Cites methods from "The infusion of matrices into stati..."

  • ...For an interesting recent account of the history of the introduction of matrix methods into statistics, see [26]....


Journal ArticleDOI
TL;DR: In this paper, a general algebraic approach to relationships between OLSEs and BLUEs of the whole and partial mean parameter vectors in a constrained general linear model (CGLM) with fixed parameters is provided.
Abstract: The well-known ordinary least-squares estimators (OLSEs) and the best linear unbiased estimators (BLUEs) under linear regression models can be represented by certain closed-form formulas composed by the given matrices and their generalized inverses in the models. This paper provides a general algebraic approach to relationships between OLSEs and BLUEs of the whole and partial mean parameter vectors in a constrained general linear model (CGLM) with fixed parameters by using a variety of matrix analysis tools on generalized inverses of matrices and matrix rank formulas. In particular, it establishes a variety of necessary and sufficient conditions for OLSEs to be BLUEs under a CGLM, which include many reasonable statistical interpretations on the equalities of OLSEs and BLUEs of parameter space in the CGLM. The whole work shows how to effectively establish matrix equalities composed by matrices and their generalized inverses and how to use them when characterizing performances of estimators of parameter spaces in linear models under most general assumptions.

13 citations
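The OLSE/BLUE relationship discussed in the abstract above can be illustrated with a minimal simulation. This sketch is not from the cited paper (and uses an unconstrained model with an invertible covariance for simplicity): it computes the ordinary least-squares estimator $(X'X)^{-1}X'y$ and the Aitken/generalized least-squares BLUE $(X'V^{-1}X)^{-1}X'V^{-1}y$, and checks that they coincide when the error covariance $V$ is the identity.

```python
import numpy as np

# Illustrative sketch: OLSE vs. BLUE (GLS) in the model y = X b + e, Cov(e) = V.
rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_normal(n)

def olse(X, y):
    # ordinary least-squares estimator: (X'X)^{-1} X'y
    return np.linalg.solve(X.T @ X, X.T @ y)

def blue(X, y, V):
    # Aitken/GLS estimator, the BLUE when Cov(e) = V is invertible:
    # (X'V^{-1}X)^{-1} X'V^{-1}y
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# With V = I the two estimators coincide; under a general V they need not.
assert np.allclose(olse(X, y), blue(X, y, np.eye(n)))
```

The paper's contribution lies in characterizing, via generalized inverses and rank formulas, exactly when such equalities persist under singular covariances and linear parameter constraints.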

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of establishing additive matrix decompositions of estimators in the context of a general linear model with partial parameter restrictions, and demonstrate how to decompose best linear unbiased estimators (BLUEs) under the constrained general linear models (CGLM) as the sums of the estimators under submodels with parameter restrictions.
Abstract: A general linear model can be given in certain multiple partitioned forms, and there exist submodels associated with the given full model. In this situation, we can make statistical inferences from the full model and submodels, respectively. It has been realized that there do exist links between inference results obtained from the full model and its submodels, and thus it would be of interest to establish certain links among estimators of parameter spaces under these models. In this approach the methodology of additive matrix decompositions plays an important role to obtain satisfactory conclusions. In this paper, we consider the problem of establishing additive decompositions of estimators in the context of a general linear model with partial parameter restrictions. We will demonstrate how to decompose best linear unbiased estimators (BLUEs) under the constrained general linear model (CGLM) as the sums of estimators under submodels with parameter restrictions by using a variety of effective tools in matrix analysis. The derivation of our main results is based on heavy algebraic operations of the given matrices and their generalized inverses in the CGLM, while the whole contributions illustrate various skillful uses of state-of-the-art matrix analysis techniques in the statistical inference of linear regression models.

12 citations


Cites background from "The infusion of matrices into stati..."

  • ...As remarked in [10], a good starting point for the entry of matrices into statistics was in 1930s, while it is now a routine procedure to use given vectors, matrices and their generalized inverses in statistical models to formulate various estimators of parameter spaces in linear models and to make the corresponding statistical inferences....


References
Book
01 Jan 1966
TL;DR: This book treats fitting a straight line by least squares, checking the quality of the fit, and the Durbin-Watson test for serial correlation in the residuals.
Abstract: Basic Prerequisite Knowledge. Fitting a Straight Line by Least Squares. Checking the Straight Line Fit. Fitting Straight Lines: Special Topics. Regression in Matrix Terms: Straight Line Case. The General Regression Situation. Extra Sums of Squares and Tests for Several Parameters Being Zero. Serial Correlation in the Residuals and the Durbin-Watson Test. More on Checking Fitted Models. Multiple Regression: Special Topics. Bias in Regression Estimates, and Expected Values of Mean Squares and Sums of Squares. On Worthwhile Regressions, Big F's, and R². Models Containing Functions of the Predictors, Including Polynomial Models. Transformation of the Response Variable. "Dummy" Variables. Selecting the "Best" Regression Equation. Ill-Conditioning in Regression Data. Ridge Regression. Generalized Linear Models (GLIM). Mixture Ingredients as Predictor Variables. The Geometry of Least Squares. More Geometry of Least Squares. Orthogonal Polynomials and Summary Data. Multiple Regression Applied to Analysis of Variance Problems. An Introduction to Nonlinear Estimation. Robust Regression. Resampling Procedures (Bootstrapping). Bibliography. True/False Questions. Answers to Exercises. Tables. Indexes.

18,952 citations

Book
14 Sep 1984
TL;DR: This book covers estimation and distribution theory for the mean vector and covariance matrix, the generalized T²-statistic, and tests of independence between sets of variates.
Abstract: Preface to the Third Edition. Preface to the Second Edition. Preface to the First Edition. 1. Introduction. 2. The Multivariate Normal Distribution. 3. Estimation of the Mean Vector and the Covariance Matrix. 4. The Distributions and Uses of Sample Correlation Coefficients. 5. The Generalized T²-Statistic. 6. Classification of Observations. 7. The Distribution of the Sample Covariance Matrix and the Sample Generalized Variance. 8. Testing the General Linear Hypothesis: Multivariate Analysis of Variance. 9. Testing Independence of Sets of Variates. 10. Testing Hypotheses of Equality of Covariance Matrices and Equality of Mean Vectors and Covariance Matrices. 11. Principal Components. 12. Canonical Correlations and Canonical Variables. 13. The Distributions of Characteristic Roots and Vectors. 14. Factor Analysis. 15. Patterns of Dependence Graphical Models. Appendix A: Matrix Theory. Appendix B: Tables. References. Index.

9,693 citations

Book
01 Jan 1965
TL;DR: This book covers the algebra of vectors and matrices, probability theory and its tools, and continuous probability models, through to least squares, estimation, and multivariate analysis.
Abstract: Algebra of Vectors and Matrices. Probability Theory, Tools and Techniques. Continuous Probability Models. The Theory of Least Squares and Analysis of Variance. Criteria and Methods of Estimation. Large Sample Theory and Methods. Theory of Statistical Inference. Multivariate Analysis. Publications of the Author. Author Index. Subject Index.

8,300 citations