Book

Generalized inverses: theory and applications

TL;DR: This book develops the theory of generalized inverses, treating the Moore-Penrose inverse as a generalized inverse of a linear operator between Hilbert spaces and presenting a spectral theory for rectangular matrices.
Abstract: * Glossary of notation * Introduction * Preliminaries * Existence and Construction of Generalized Inverses * Linear Systems and Characterization of Generalized Inverses * Minimal Properties of Generalized Inverses * Spectral Generalized Inverses * Generalized Inverses of Partitioned Matrices * A Spectral Theory for Rectangular Matrices * Computational Aspects of Generalized Inverses * Miscellaneous Applications * Generalized Inverses of Linear Operators between Hilbert Spaces * Appendix A: The Moore of the Moore-Penrose Inverse * Bibliography * Subject Index * Author Index
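The Moore-Penrose inverse at the book's core is easy to illustrate numerically. A minimal NumPy sketch (the rank-deficient matrix A below is an arbitrary example chosen here, not taken from the book) verifying the four Penrose conditions and the minimal-norm least-squares property:

```python
import numpy as np

# A rectangular, rank-deficient matrix: no ordinary inverse exists.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])  # rank 1

A_pinv = np.linalg.pinv(A)  # Moore-Penrose inverse, computed via the SVD

# The four Penrose conditions characterize A_pinv uniquely.
assert np.allclose(A @ A_pinv @ A, A)            # (1) A X A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)  # (2) X A X = X
assert np.allclose((A @ A_pinv).T, A @ A_pinv)   # (3) A X is symmetric
assert np.allclose((A_pinv @ A).T, A_pinv @ A)   # (4) X A is symmetric

# For any b, x = A_pinv @ b is the minimal-norm least-squares
# solution of the (possibly inconsistent) system A x = b.
b = np.array([1.0, 0.0, 0.0])
x = A_pinv @ b
```

The same minimal-norm solution is what `np.linalg.lstsq` returns for rank-deficient systems, which gives an independent check.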
Citations
Book
01 Apr 1988
TL;DR: This book covers matrix algebra (Kronecker products, the vec-operator, the Moore-Penrose inverse), the theory and practice of matrix differentials, and applications to the linear regression model and maximum likelihood estimation.
Abstract: Preface MATRICES: Basic Properties of Vectors and Matrices Kronecker Products, the Vec-Operator and the Moore- Penrose Inverse Miscellaneous Matrix Results DIFFERENTIALS: THE THEORY: Mathematical Preliminaries Differentials and Differentiability The Second Differential Static Optimization DIFFERENTIALS: THE PRACTICE: Some Important Differentials First- Order Differentials and Jacobian Matrices Second-Order Differentials and Hessian Matrices INEQUALITIES: Inequalities THE LINEAR MODEL: Statistical Preliminaries The Linear Regression Model Further Topics in the Linear Model APPLICATIONS TO MAXIMUM LIKELIHOOD ESTIMATION: Maximum Likelihood Estimation Simultaneous Equations Topics in Psychometrics Subject Index Bibliography.
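The vec-operator and Kronecker product listed above interact through the identity vec(AXB) = (B' ⊗ A) vec(X), the workhorse of matrix differential calculus. A small sketch with arbitrary random matrices (sizes chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

def vec(M):
    # Column-stacking vec operator (column-major / Fortran order).
    return M.reshape(-1, order="F")

# The identity: vec(A X B) = (B' kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```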

2,868 citations


Cites background from "Generalized inverses: theory and ap..."

  • ...The interested reader may wish to consult Rao and Mitra (1971), Pringle and Rayner (1971), Boullion and Odell (1971), or Ben-Israel and Greville (1974). §9. The results in this section are due to Penrose (1956)....

    [...]

Book
15 Jun 2001
TL;DR: The time scales calculus developed in this book unifies differential and difference equations, covering first- and second-order linear equations, linear systems and higher-order equations, dynamic inequalities, and linear symplectic dynamic systems.
Abstract: Preface * The Time Scales Calculus * First Order Linear Equations * Second Order Linear Equations * Self-Adjoint Equations * Linear Systems and Higher Order Equations * Dynamic Inequalities * Linear Symplectic Dynamic Systems * Extensions * Solutions to Selected Problems * Bibliography * Index

2,581 citations

Journal ArticleDOI
TL;DR: The model, which nicely fits into the so-called "statistical relational learning" framework, could also be used to compute document or word similarities, and could be applied to machine-learning and pattern-recognition tasks involving a relational database.
Abstract: This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markov-chain model of random walk through the database. More precisely, we compute quantities (the average commute time, the pseudoinverse of the Laplacian matrix of the graph, etc.) that provide similarities between any pair of nodes, having the nice property of increasing when the number of paths connecting those elements increases and when the "length" of paths decreases. It turns out that the square root of the average commute time is a Euclidean distance and that the pseudoinverse of the Laplacian matrix is a kernel matrix (its elements are inner products closely related to commute times). A principal component analysis (PCA) of the graph is introduced for computing the subspace projection of the node vectors in a manner that preserves as much variance as possible in terms of the Euclidean commute-time distance. This graph PCA provides a nice interpretation of the "Fiedler vector," widely used for graph partitioning. The model is evaluated on a collaborative-recommendation task where suggestions are made about which movies people should watch based upon what they watched in the past. Experimental results on the MovieLens database show that the Laplacian-based similarities perform well in comparison with other methods. The model, which nicely fits into the so-called "statistical relational learning" framework, could also be used to compute document or word similarities, and, more generally, it could be applied to machine-learning and pattern-recognition tasks involving a relational database.
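The commute-time quantities described in the abstract come directly from the pseudoinverse of the graph Laplacian. A minimal sketch on a small hypothetical 4-node graph, using the standard formula n(i,j) = vol · (l⁺ᵢᵢ + l⁺ⱼⱼ − 2l⁺ᵢⱼ) for the average commute time (the adjacency matrix here is an invented toy example):

```python
import numpy as np

# Adjacency matrix of a small, hypothetical weighted undirected graph.
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # graph Laplacian
L_pinv = np.linalg.pinv(L)   # Moore-Penrose pseudoinverse of L
vol = W.sum()                # graph volume (sum of all degrees)

def avg_commute_time(i, j):
    # n(i, j) = vol * (l+_ii + l+_jj - 2 l+_ij)
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2.0 * L_pinv[i, j])

# The square root of the average commute time is a Euclidean
# distance between nodes.
d_03 = np.sqrt(avg_commute_time(0, 3))
```

Node 3 hangs off node 2 by a single bridge edge, so its commute time to node 0 exceeds that of node 2, matching the "more/shorter paths means more similar" property the abstract describes.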

1,276 citations


Cites background from "Generalized inverses: theory and ap..."

  • ...A thorough treatment of matrix pseudoinverses and their applications can be found in [6]....

    [...]

BookDOI
22 Nov 2017
TL;DR: This book develops a unified algebraic approach to control design, covering covariance controllers, H-controllers, model reduction, and projection methods.
Abstract: 1. Introduction 2. Linear Algebra Review 3. Analysis of First-order Information 4. Second-order Information in Linear Systems 5. Covariance Controllers 6. Covariance Upper Boundary Controllers 7. H-Controllers 8. Model Reduction 9. Unified Perspective 10. Projection Methods 11. Successive Centring Methods 12. A: Linear Algebra Basics 13. B: Calculus of Vectors and Matrices 14. C: Balanced Model Reduction

1,119 citations

Journal ArticleDOI
TL;DR: Using an extension of Pierra's product space formalism, it is shown here that a multiprojection algorithm converges and is fully simultaneous, i.e., it uses in each iterative step all sets of the convex feasibility problem.
Abstract: Generalized distances give rise to generalized projections into convex sets. An important question is whether or not one can use within the same projection algorithm different types of such generalized projections. This question has practical consequences in the area of signal detection and image recovery in situations that can be formulated mathematically as a convex feasibility problem. Using an extension of Pierra's product space formalism, we show here that a multiprojection algorithm converges. Our algorithm is fully simultaneous, i.e., it uses in each iterative step all sets of the convex feasibility problem. Different multiprojection algorithms can be derived from our algorithmic scheme by a judicious choice of the Bregman functions which govern the process. As a by-product of our investigation we also obtain block-iterative schemes for certain kinds of linearly constrained optimization problems.
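A fully simultaneous projection step of the kind described can be sketched with plain Euclidean projections (the simplest choice of Bregman projection) on an invented two-set feasibility problem; this is an illustrative toy, not the paper's general scheme:

```python
import numpy as np

# Toy convex feasibility problem: find a point in the intersection of
# the half-space H = {x : a.x <= b} and the closed unit ball.
a, b = np.array([1.0, 1.0]), 1.0

def proj_halfspace(x):
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

x = np.array([3.0, 2.0])  # infeasible starting point
for _ in range(100):
    # Fully simultaneous step: every set is projected onto in each
    # iteration, and the projections are averaged.
    x = 0.5 * (proj_halfspace(x) + proj_ball(x))
```

After enough iterations x satisfies both constraints to numerical tolerance; sequential (one-set-at-a-time) methods would instead cycle through the projections.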

1,085 citations

References
Book
01 Jan 1873
TL;DR: Arguably the most influential nineteenth-century scientist for twentieth-century physics, James Clerk Maxwell (1831–1879) demonstrated that electricity, magnetism and light are all manifestations of the same phenomenon: the electromagnetic field.
Abstract: Arguably the most influential nineteenth-century scientist for twentieth-century physics, James Clerk Maxwell (1831–1879) demonstrated that electricity, magnetism and light are all manifestations of the same phenomenon: the electromagnetic field. A fellow of Trinity College Cambridge, Maxwell became, in 1871, the first Cavendish Professor of Physics at Cambridge. His famous equations - a set of four partial differential equations that relate the electric and magnetic fields to their sources, charge density and current density - first appeared in fully developed form in his 1873 Treatise on Electricity and Magnetism. This two-volume textbook brought together all the experimental and theoretical advances in the field of electricity and magnetism known at the time, and provided a methodical and graduated introduction to electromagnetic theory. Volume 2 covers magnetism and electromagnetism, including the electromagnetic theory of light, the theory of magnetic action on light, and the electric theory of magnetism.

9,565 citations

Book
01 Jun 1970
TL;DR: This book develops existence theorems and iterative methods for solving nonlinear equations in several variables, with local, semilocal, and global convergence analyses of minimization and related methods.
Abstract: Preface to the Classics Edition Preface Acknowledgments Glossary of Symbols Introduction Part I. Background Material. 1. Sample Problems 2. Linear Algebra 3. Analysis Part II. Nonconstructive Existence Theorems. 4. Gradient Mappings and Minimization 5. Contractions and the Continuation Property 6. The Degree of a Mapping Part III. Iterative Methods. 7. General Iterative Methods 8. Minimization Methods Part IV. Local Convergence. 9. Rates of Convergence-General 10. One-Step Stationary Methods 11. Multistep Methods and Additional One-Step Methods Part V. Semilocal and Global Convergence. 12. Contractions and Nonlinear Majorants 13. Convergence under Partial Ordering 14. Convergence of Minimization Methods An Annotated List of Basic Reference Books Bibliography Author Index Subject Index.
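Newton's method is the prototypical iterative method in this line of work. A minimal sketch on an invented toy system (x² + y² = 4, x = y), chosen only to show the iteration:

```python
import numpy as np

# F(v) = 0 encodes the system x^2 + y^2 = 4, x = y.
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x - y])

def J(v):
    # Jacobian of F.
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0, -1.0]])

v = np.array([1.0, 2.0])  # initial guess
for _ in range(20):
    v = v - np.linalg.solve(J(v), F(v))  # Newton step
# v converges to (sqrt(2), sqrt(2))
```

With a nonsingular Jacobian at the root, convergence is quadratic, which is the kind of "rate of convergence" result the book's Part IV formalizes.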

7,669 citations

Journal ArticleDOI
TL;DR: In this paper, a method of estimating the parameters of a set of regression equations is reported which involves application of Aitken's generalized least-squares to the whole system of equations.
Abstract: In this paper a method of estimating the parameters of a set of regression equations is reported which involves application of Aitken's generalized least-squares [1] to the whole system of equations. Under conditions generally encountered in practice, it is found that the regression coefficient estimators so obtained are at least asymptotically more efficient than those obtained by an equation-by-equation application of least squares. This gain in efficiency can be quite large if “independent” variables in different equations are not highly correlated and if disturbance terms in different equations are highly correlated. Further, tests of the hypothesis that all regression equation coefficient vectors are equal, based on “micro” and “macro” data, are described. If this hypothesis is accepted, there will be no aggregation bias. Finally, the estimation procedure and the “micro-test” for aggregation bias are applied in the analysis of annual investment data, 1935–1954, for two firms.
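Aitken's generalized least-squares estimator, the building block of the procedure described, can be sketched for a single equation with known error covariance (all data below are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated single-equation example with known heteroskedastic
# error covariance Omega (diagonal here).
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([2.0, -1.0])
sigma2 = np.linspace(0.5, 5.0, n)  # known error variances
y = X @ beta_true + rng.standard_normal(n) * np.sqrt(sigma2)

# Aitken's GLS estimator: beta = (X' Omega^-1 X)^-1 X' Omega^-1 y
Om_inv = np.diag(1.0 / sigma2)
beta_gls = np.linalg.solve(X.T @ Om_inv @ X, X.T @ Om_inv @ y)
```

The paper's seemingly-unrelated-regressions method applies the same estimator to the stacked system of equations, with Omega built from the cross-equation disturbance covariances.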

7,637 citations

Book
01 Jan 1969

6,650 citations

Journal ArticleDOI
TL;DR: A solution T of the least-squares problem AT = B + E with T constrained to be orthonormal is presented which, unlike an earlier solution by Green, is applicable to matrices A and B of less than full column rank.
Abstract: A solution T of the least-squares problem AT = B + E, given A and B, so that trace(E′E) = minimum and T′T = I is presented. It is compared with a less general solution of the same problem which was given by Green [5]. The present solution, in contrast to Green's, is applicable to matrices A and B which are of less than full column rank. Some technical suggestions for the numerical computation of T and an illustrative example are given.
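This orthogonal Procrustes problem has a now-standard closed form via the SVD of A′B: if A′B = USV′, then T = UV′. A minimal sketch with random data (here B is constructed as an exact orthogonal transform of A so the recovered T can be checked):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal Procrustes: find orthonormal T minimizing ||A T - B||_F.
A = rng.standard_normal((10, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix
B = A @ Q  # B is an exact orthogonal transform of A

# Closed-form solution: T = U V' where A'B = U S V'.
U, _, Vt = np.linalg.svd(A.T @ B)
T = U @ Vt
```

The SVD exists for any A′B, which is why this route tolerates A and B of less than full column rank.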

1,924 citations