Topic
Square matrix
About: Square matrix is a research topic. Over its lifetime, 5,000 publications have appeared within this topic, receiving 92,428 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, it was shown that the eigenprojection of a matrix A can be calculated with the use of any annihilating polynomial for A^u, where u ≥ ind A.
Abstract: Matrix theory and its applications make wide use of the eigenprojections of square matrices. The paper demonstrated that the eigenprojection of a matrix A can be calculated with the use of any annihilating polynomial for A^u, where u ≥ ind A. This enables one to establish the components and the minimum polynomial of A, as well as the Drazin inverse A^D.
28 citations
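The Drazin inverse and the eigenprojection at eigenvalue zero discussed above can be sketched numerically. The snippet below does not use the paper's annihilating-polynomial method; instead it uses the standard pseudoinverse formula A^D = A^k (A^(2k+1))^+ A^k (valid for any k ≥ ind A), with the eigenprojection recovered as P = I − A A^D. The matrices and the function name are illustrative.

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, for k >= ind(A).

    This is a pseudoinverse-based alternative to the paper's
    annihilating-polynomial construction."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# A nilpotent example: ind(A) = 2, so A^D = 0 and the eigenprojection
# onto the generalized null space is P = I - A A^D = I.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
AD = drazin(A, 2)
P = np.eye(2) - A @ AD   # principal idempotent at eigenvalue 0
```

For an idempotent matrix the Drazin inverse is the matrix itself, which gives a quick sanity check on the formula.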
TL;DR: The main purpose of this correspondence is to establish an expression for the solution of the Lyapunov matrix equation in terms of the principal idempotents and nilpotents of the coefficient matrices.
Abstract: The main purpose of this correspondence is to establish an expression for the solution of the Lyapunov matrix equation in terms of the principal idempotents and nilpotents of the coefficient matrices. The coefficient matrices are not necessarily of the same size and may have common characteristic roots and have elements belonging to the field of complex numbers.
28 citations
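The equation treated in the correspondence, with coefficient matrices of different sizes, has the Sylvester form AX + XB = C. The sketch below solves it numerically with SciPy's Bartels–Stewart routine rather than through the paper's principal-idempotent expansion; the random data is purely illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# AX + XB = C with A (3x3) and B (2x2) of different sizes, as in the
# correspondence.  The solution is unique when A and -B share no
# eigenvalue, which holds almost surely for this random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((3, 2))

X = solve_sylvester(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
```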
18 Jun 2000
TL;DR: A new version of RGEQRF and its accompanying SMP parallel counterpart is presented, implemented for a future release of the IBM ESSL library; it represents a robust high-performance piece of library software for QR factorization on uniprocessor and multiprocessor systems.
Abstract: In [5,6], we presented algorithm RGEQR3, a purely recursive formulation of the QR factorization. Using recursion leads us to a natural way to choose the k-way aggregating Householder transform of Schreiber and Van Loan [10]. RGEQR3 is a performance critical subroutine for the main (hybrid recursive) routine RGEQRF for QR factorization of a general m × n matrix. This contribution presents a new version of RGEQRF and its accompanying SMP parallel counterpart, implemented for a future release of the IBM ESSL library. It represents a robust high-performance piece of library software for QR factorization on uniprocessor and multiprocessor systems. The implementation builds on previous results [5,6]. In particular, the new version is optimized in a number of ways to improve the performance; e.g., for small matrices and matrices with a very small number of columns. This is partly done by including mini blocking in the otherwise pure recursive RGEQR3. We describe the salient features of this implementation. Our serial implementation outperforms the corresponding LAPACK routine by 10-65% for square matrices and 10-100% on tall and thin matrices on the IBM POWER2 and POWER3 nodes. The tests covered matrix sizes which varied from very small to very large. The SMP parallel implementation shows close to perfect speedup on a 4-processor PPC604e node.
28 citations
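The recursive column-splitting idea behind RGEQR3 can be sketched as follows. This is a hypothetical simplification, not the ESSL routine: it splits the columns in half, factors the left panel recursively, applies its orthogonal factor to the trailing panel, and recurses on the remaining rows, with a plain LAPACK-style kernel as the base case. The real routine instead aggregates Householder transforms in compact-WY form and adds mini-blocking for performance.

```python
import numpy as np

def rqr(A, nb=2):
    """Toy recursive QR in the spirit of RGEQR3 (assumes m >= n).

    Split columns, factor the left panel, update the trailing panel
    with Q1^T, then recurse on the lower-right block."""
    m, n = A.shape
    if n <= nb:                                # base case: library kernel
        return np.linalg.qr(A, mode="complete")
    k = n // 2
    Q1, R1 = rqr(A[:, :k], nb)                 # factor left panel
    B = Q1.T @ A[:, k:]                        # update trailing panel
    Q2, R2 = rqr(B[k:, :], nb)                 # recurse on lower block
    Q = Q1.copy()
    Q[:, k:] = Q1[:, k:] @ Q2                  # Q = Q1 * diag(I_k, Q2)
    R = np.zeros((m, n))
    R[:k, :k] = R1[:k, :]
    R[:k, k:] = B[:k, :]
    R[k:, k:] = R2
    return Q, R

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))                # a tall-and-thin test matrix
Q, R = rqr(A)
```

The recursion naturally produces the large matrix-matrix updates (the `Q1.T @ A[:, k:]` step) that give blocked QR its cache efficiency.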
01 Jan 1938
TL;DR: In this paper, the evaluation of a matrix product H′A⁻¹K, where A is square and non-singular (|A| ≠ 0), is presented as the general operation underlying the solution of linear equations and related problems; the matrix H′ is obtained from H by transposition, that is, by changing rows into columns.
Abstract: The solution of simultaneous linear algebraic equations, the evaluation of the adjugate or the reciprocal of a given square matrix, and the evaluation of the bilinear or quadratic form reciprocal to a given form, are all special cases of a certain general operation, namely the evaluation of a matrix product H′A⁻¹K, where A is square and non-singular, that is, the determinant |A| is not zero. (Matrix multiplication is like determinant multiplication, but exclusively row-into-column. The matrix H′ is obtained from H by transposition, that is, by changing rows into columns.) The matrices H′ and K may be rectangular. If A is singular, the reciprocal A⁻¹ does not exist; and in such a case the product H′(adj A)K may be required. Arithmetically, the only difference in the computation of H′A⁻¹K and H′(adj A)K is that in the latter case a final division of all elements by |A| is not performed.
28 citations
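A modern restatement of the general operation above: H′A⁻¹K is best evaluated by solving AY = K and forming H′Y, never by building A⁻¹ explicitly. The adjugate variant follows from adj(A) = |A|·A⁻¹ when A is non-singular. The matrices below are illustrative random data.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))      # square, non-singular (almost surely)
H = rng.standard_normal((4, 3))      # H' and K may be rectangular
K = rng.standard_normal((4, 2))

# H' A^{-1} K without forming A^{-1}: solve A Y = K, then take H' Y.
M = H.T @ np.linalg.solve(A, K)      # shape (3, 2)

# For non-singular A, adj(A) = det(A) * A^{-1}, so H'(adj A)K differs
# from H' A^{-1} K only by the factor det(A).
M_adj = np.linalg.det(A) * M         # H'(adj A)K
```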
TL;DR: In this article, Rao et al. showed that for a general first-order design matrix A and a given dispersion matrix Σ of the n unknown net coordinates, the unique minimum Euclidean norm solution for the weight matrix P is P = (A′)⁺Σ⁻¹A⁺, where A′ indicates the transpose and A⁺ the pseudoinverse of the rectangular matrix A.
Abstract: Let the configuration or first-order design matrix A of a geodetic net be given and let the weight or second-order design matrix P of m observations be unknown. For some predetermined choice of the dispersion matrix Σ of the n unknown net coordinates, the unique minimum Euclidean norm solution for the weight matrix P is P = (A′)⁺Σ⁻¹A⁺. A′ indicates the transpose and A⁺ the pseudoinverse of the (in general) rectangular matrix A. For a general first-order design matrix A and a given Σ, there does not exist a solution for a diagonal positive definite weight matrix P. Our solution for P belongs to the first category of an optimal design of C. R. Rao. The designs P = I and P = (A′)⁺Σ⁻¹A⁺ yield the same best linear unbiased estimator for the unknowns, but the variance-covariance matrix for these designs is different. Examples, especially the structure by G. I. Taylor and T. von Kármán for the homogeneous and isotropic geodetic net, illustrate theoretical results.
28 citations
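The minimum-norm weight matrix P = (A′)⁺Σ⁻¹A⁺ is straightforward to form with pseudoinverses. A sketch with illustrative random data follows; it assumes A has full column rank, in which case A⁺A = I and the BLUE's dispersion (A′PA)⁻¹ reproduces the prescribed Σ exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 3
A = rng.standard_normal((m, n))          # first-order design matrix
S = rng.standard_normal((n, n))
Sigma = S @ S.T + n * np.eye(n)          # prescribed dispersion, SPD

# Minimum Euclidean norm weight matrix: P = (A')^+ Sigma^{-1} A^+
Ap = np.linalg.pinv(A)
P = Ap.T @ np.linalg.inv(Sigma) @ Ap

# With full column rank, A^+ A = I, hence A' P A = Sigma^{-1} and the
# estimator's variance-covariance matrix (A' P A)^{-1} equals Sigma.
Sigma_check = np.linalg.inv(A.T @ P @ A)
```

Note that P is symmetric but, as the abstract states, generally not diagonal, so it cannot in general be realized by independently weighted observations.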