
Showing papers on "Matrix analysis" published in 1969


Proceedings ArticleDOI
26 Aug 1969
TL;DR: A direct method of obtaining an automatic nodal numbering scheme to ensure that the corresponding coefficient matrix will have a narrow bandwidth is presented.
Abstract: The finite element displacement method of analyzing structures involves the solution of large systems of linear algebraic equations with sparse, structured, symmetric coefficient matrices. There is a direct correspondence between the structure of the coefficient matrix, called the stiffness matrix in this case, and the structure of the spatial network delineating the element layout. For the efficient solution of these systems of equations, it is desirable to have an automatic nodal numbering (or renumbering) scheme to ensure that the corresponding coefficient matrix will have a narrow bandwidth. This is the problem considered by R. Rosen [1]. A direct method of obtaining such a numbering scheme is presented. In addition several methods are reviewed and compared.

1,518 citations
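
The renumbering idea is easy to sketch: number the nodes breadth-first from a low-degree starting node, taking unvisited neighbours in order of increasing degree, so that nodes coupled by an element receive nearby numbers. A minimal Python sketch in this spirit (not necessarily the paper's exact algorithm; the mesh and starting node below are illustrative assumptions):

```python
from collections import deque

def bfs_renumber(adj, start):
    """Renumber graph nodes breadth-first from `start`, visiting
    neighbours in order of increasing degree, so connected nodes get
    nearby labels. `adj` maps each node to a set of neighbours."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in sorted(adj[node] - seen, key=lambda v: len(adj[v])):
            seen.add(nbr)
            queue.append(nbr)
    return {old: new for new, old in enumerate(order)}

def bandwidth(adj, numbering):
    """Half-bandwidth of the stiffness matrix under `numbering`."""
    return max(abs(numbering[i] - numbering[j])
               for i in adj for j in adj[i])

# Illustrative element mesh; starting from a low-degree node helps.
adj = {0: {1, 4}, 1: {0, 2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {0, 1, 3}}
print(bandwidth(adj, bfs_renumber(adj, 2)))   # narrow bandwidth: 2
```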


Book
01 Jan 1969
TL;DR: In this article, the authors introduce matrix algebra, methods for solving equations and finding inverses, vectors and vector spaces, linear transformations, and eigensystems with applications.
Abstract: 1. Matrix Algebra. 2. Some Simple Applications and Questions. 3. Solving Equations and Finding Inverses: Methods. 4. Solving Equations and Finding Inverses: Theory. 5. Vectors and Vector Spaces. 6. Introduction to Geometrical Vectors. 7. Linear Transformations and Matrices. 8. Eigenvalues and Eigenvectors: An Overview. 9. Eigensystems of Symmetric, Hermitian, and Normal Matrices, with Applications. 10. Eigensystems of General Matrices, with Applications. 11. Quadratic Forms and Variational Characterizations of Eigenvalues. 12. Linear Programming. Answers and Aids to Selected Problems. Bibliography. Index.

1,484 citations


Journal ArticleDOI
01 Jan 1969
TL;DR: In this paper, a class of discrete-time transfer function matrices termed discrete positive-real matrices is defined, and a system theoretic description of them is given, analogous to that for ordinary positive real matrices.
Abstract: A class of discrete-time transfer-function matrices termed discrete positive-real matrices is defined, and a system theoretic description of them is given, analogous to that for ordinary positive-real matrices. This description is applied to analysing the stability of a discrete-time system with linear forward part, and time-varying memoryless feedback.

182 citations


Journal ArticleDOI
TL;DR: In this paper, a unified treatment of the algebra of Stokes parameters and the coherency matrix is presented, and explicit formulas are given which relate the Jones and Mueller matrix formulations to the coherency matrix and to each other.
Abstract: A unified treatment of the algebra of Stokes parameters and the coherency matrix is presented. Explicit formulas are given which relate the Jones and Mueller matrix formulations to the coherency matrix, and to each other. The general method involves the trace of products of matrices, and the algebra is both useful and capable of generalization.

81 citations
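
The trace method can be made concrete: with one common convention for the Pauli matrices (sign and ordering conventions vary across the literature), each Stokes parameter is the trace of a Pauli matrix times the 2x2 coherency matrix. A sketch under that assumed convention:

```python
import numpy as np

# Identity plus the three Pauli matrices (one common ordering;
# sign and ordering conventions vary)
SIGMA = [np.eye(2),
         np.array([[1, 0], [0, -1]]),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]])]

def stokes_from_coherency(J):
    """Stokes parameters s_i = Tr(sigma_i J) of a 2x2 coherency matrix J."""
    return np.array([np.trace(s @ J).real for s in SIGMA])

# Example: fully x-polarized light -> S = (1, 1, 0, 0)
J = np.array([[1, 0], [0, 0]], dtype=complex)
print(stokes_from_coherency(J))
```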


Book
01 Jan 1969
TL;DR: In this paper, the authors cover matrix operations, simultaneous linear equations, eigenvalues and eigenvectors, matrix calculus, and the solution of linear differential equations, including systems with constant coefficients.
Abstract: Matrices. Basic Concepts. Operations. Matrix Multiplication. Special Matrices. Submatrices and Partitioning. Vectors. The Geometry of Vectors. Simultaneous Linear Equations: Linear Systems. Solutions by Substitution. Gaussian Elimination. Pivoting Strategies. Linear Independence. Rank. Theory of Solutions. Appendix. The Inverse: Introduction. Calculating Inverses. Simultaneous Equations. Properties of the Inverse. LU Decomposition. Appendix. Determinants. Expansion by Cofactors. Properties of Determinants. Pivotal Condensation. Inversion. Cramer's Rule. Appendix. Eigenvalues and Eigenvectors. Definitions. Eigenvalues. Eigenvectors. Properties of Eigenvalues and Eigenvectors. Linearly Independent Eigenvectors. Power Methods. Real Inner Products. Introduction. Orthonormal Vectors. Projections and QR Decompositions. The QR Algorithm. Least-Squares. Matrix Calculus. Well-Defined Functions. Cayley-Hamilton Theorem. Polynomials of Matrices--Distinct Eigenvalues. Polynomials of Matrices--General Case. Functions of a Matrix. The Function e^At. Complex Eigenvalues. Properties of e^A. Derivatives of a Matrix. Appendix. Linear Differential Equations. Fundamental Form. Reduction of an nth Order Equation. Reduction of a System. Solutions of Systems with Constant Coefficients. Solutions of Systems--General Case. Appendix. Jordan Canonical Forms. Similar Matrices. Diagonalizable Matrices. Functions of Matrices--Diagonalizable Matrices. Generalized Eigenvectors. Chains. Canonical Basis. Jordan Canonical Forms. Functions of Matrices--General Case. The Function e^At. Appendix. Special Matrices. Complex Inner Product. Self-Adjoint Matrices. Real Symmetric Matrices. Orthogonal Matrices. Hermitian Matrices. Unitary Matrices. Summary. Positive Definite Matrices. Answers and Hints to Selected Problems. Index.

66 citations


Journal ArticleDOI
TL;DR: A characterization of nonnegative matrices by their diagonal products is studied and used to derive properties of stochastic matrices.
Abstract: Characterization of nonnegative matrices by diagonal products, with implications for properties of stochastic matrices.

53 citations


Journal ArticleDOI
TL;DR: A systematic relaxation method, a generalisation of S.O.R., is analysed for consistently ordered matrices as defined by Broyden (1964); for certain choices of a diagonal relaxation matrix it has a better asymptotic rate of convergence than S.O.R. and requires fewer calculations and less computer store.
Abstract: A systematic relaxation method is analysed for consistently ordered matrices as defined by Broyden (1964). The method is a generalisation of successive over-relaxation (S.O.R.). A relation is derived between the eigenvalues of the iteration matrix of the method and the eigenvalues of the Jacobi iteration matrix. For p-cyclic matrices, the method corresponds to using a special type of diagonal matrix instead of a single relaxation factor. For certain choices of this diagonal matrix, the method has a better asymptotic rate of convergence than S.O.R. and requires fewer calculations and less computer store.

21 citations
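
A minimal sketch of the generalised iteration, assuming the straightforward componentwise reading of "a diagonal matrix instead of a single relaxation factor" (the example matrix and factors are illustrative, not from the paper):

```python
import numpy as np

def sor_diag(A, b, omegas, x0=None, iters=100):
    """S.O.R. sweep with a per-component relaxation factor omegas[i]
    (a diagonal relaxation matrix) instead of one scalar omega;
    omegas = [w, w, ..., w] recovers ordinary S.O.R."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            residual = b[i] - A[i] @ x      # uses updated entries j < i
            x[i] += omegas[i] * residual / A[i, i]
    return x

A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
b = np.array([1.0, 2.0, 3.0])
print(sor_diag(A, b, omegas=[1.1, 1.05, 1.1]))   # ~ np.linalg.solve(A, b)
```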


Journal ArticleDOI
TL;DR: In this article, an algorithm is described to express a correlation matrix as the sum of two matrices, of which one is of rank less than its order, and the other, the residual matrix, is of prescribed structure.
Abstract: An algorithm is described, the purpose of which is to express a correlation matrix as the sum of two matrices, of which one is of rank less than its order, and the other, the residual matrix, is of prescribed structure. Applications are described to approximate simplex and circumplex matrices, to the factoring of groups of variables, and to a form of multi-mode factor analysis.

15 citations
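
For the special case where the prescribed residual structure is diagonal, the decomposition can be sketched by alternating eigen-truncation with residual updates; the paper's algorithm handles more general residual structures, so this is an assumed simplification:

```python
import numpy as np

def lowrank_plus_diag(R, k, iters=50):
    """Split a correlation matrix R into F @ F.T (rank k) plus a
    diagonal residual, alternating eigen-truncation and residual
    updates (classical principal-factor iteration; the paper treats
    more general prescribed residual structures)."""
    psi = np.zeros(len(R))                    # residual diagonal
    for _ in range(iters):
        w, V = np.linalg.eigh(R - np.diag(psi))
        top = np.argsort(w)[-k:]              # k largest eigenvalues
        F = V[:, top] * np.sqrt(np.clip(w[top], 0, None))
        psi = np.diag(R - F @ F.T).copy()
    return F, psi

R = np.array([[1.0, .6, .5], [.6, 1.0, .4], [.5, .4, 1.0]])
F, psi = lowrank_plus_diag(R, k=1)
print(np.round(F @ F.T + np.diag(psi) - R, 3))  # small off-diagonal error
```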


Journal ArticleDOI
TL;DR: A method for computing the generalized inverse of a matrix is described, which makes use of elementary orthogonal matrices and Gaussian elimination, and yields orthonormal bases for the ranges and the null spaces of the matrix and the generalized inverse.
Abstract: A method for computing the generalized inverse of a matrix is described, which makes use of elementary orthogonal matrices and Gaussian elimination. The method also yields orthonormal bases for the ranges and the null spaces of the matrix and the generalized inverse. Modifications of the method for the solution of simultaneous linear equations are given. Compact storage schemes, in the case of sparse matrices, are also described.
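
The same objects (the generalized inverse plus orthonormal bases for the range and null space) can be sketched via the SVD; this is not the paper's elimination-based procedure, only a short route to the same outputs:

```python
import numpy as np

def pinv_with_bases(A, tol=1e-12):
    """Moore-Penrose inverse of A plus orthonormal bases for its range
    and null space. (The paper builds these with elementary orthogonal
    matrices and Gaussian elimination; this sketch reads the same
    objects off the SVD.)"""
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))           # numerical rank
    A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T
    range_A = U[:, :r]                        # orthonormal basis of range(A)
    null_A = Vt[r:].T                         # orthonormal basis of null(A)
    return A_pinv, range_A, null_A

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 0.0]])   # rank 1
Ap, ran, nul = pinv_with_bases(A)
print(np.allclose(A @ Ap @ A, A))             # Penrose condition holds
```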

Journal ArticleDOI
TL;DR: A generalized-inverse concept for matrices under P-Q norms is developed, extending Penrose's best approximate solution of linear equations.
Abstract: P-Q norms and a generalized inverse concept for matrices, derived by extension of Penrose's best approximate solution of linear equations.

Journal ArticleDOI
TL;DR: In this article, the first- and second-order density matrices of symmetry-projected single-determinant wavefunctions are derived in the case of finite groups.
Abstract: We derive the first‐ and second‐order density matrices of symmetry‐projected single‐determinant wavefunctions, in the case of finite groups. We also give the first‐order density matrices for functions projected in the axial‐rotation group. A method is presented for extracting from a density matrix its totally symmetric component, and the eigenfunctions of this component are shown to be the same after projection as before.

Journal ArticleDOI
TL;DR: A general method for evaluation of transition matrices is presented and involves determination of the inverse of the modified Vandermonde matrix followed by its post multiplication with the initial vector.
Abstract: A general method for evaluation of transition matrices is presented. The technique involves determination of the inverse of the modified Vandermonde matrix followed by its post multiplication with the initial vector.
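
For distinct eigenvalues the technique reduces to solving an ordinary Vandermonde system for the coefficients of a matrix polynomial; repeated eigenvalues require the modified Vandermonde matrix with derivative rows, which this sketch omits:

```python
import numpy as np

def transition_matrix(A, t):
    """State-transition matrix exp(A t) via the Vandermonde route:
    solve V a = [exp(lambda_i t)] for the coefficients of the matrix
    polynomial exp(A t) = sum_k a_k A^k. Sketch for distinct
    eigenvalues only (repeated ones need the *modified* Vandermonde
    matrix with derivative rows, as in the paper)."""
    lam = np.linalg.eigvals(A)
    V = np.vander(lam, increasing=True)       # V[i, k] = lam_i**k
    a = np.linalg.solve(V, np.exp(lam * t))
    n = len(A)
    return sum(a[k] * np.linalg.matrix_power(A, k) for k in range(n)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # eigenvalues -1, -2
print(np.round(transition_matrix(A, 1.0), 4))
```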

Journal ArticleDOI
01 Jul 1969
TL;DR: In this article, a canonical form for real nonderogatory convergent matrices, such as the A matrices which occur in the description of linear discrete-time dynamical systems by vector-matrix difference equations of the form x_{k+1} = A x_k + B u_k, is proposed.
Abstract: Some applications of canonical matrices to linear continuous-time systems are reviewed. A canonical form is proposed for real nonderogatory convergent matrices, such as the A matrices which occur in the description of linear discrete-time dynamical systems by vector-matrix difference equations of the form x_{k+1} = A x_k + B u_k. The new canonical form is applied to the generation of particular and general solutions of the matrix equation A^T L A - L = -K, which occurs in the application of Lyapunov theory to the analysis and design of such systems.
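
For a convergent A, the equation A^T L A - L = -K can also be solved by summing a series, which makes a compact sketch (a generic solution route, not the paper's canonical-form construction):

```python
import numpy as np

def discrete_lyapunov(A, K, terms=200):
    """Solve A.T @ L @ A - L = -K for a convergent (Schur-stable) A by
    summing L = sum_k (A.T)^k K A^k, which converges precisely when
    all eigenvalues of A lie inside the unit circle."""
    L = np.zeros_like(K, dtype=float)
    M = K.astype(float).copy()
    for _ in range(terms):
        L += M
        M = A.T @ M @ A
    return L

A = np.array([[0.5, 0.2], [0.0, 0.3]])        # convergent: |eigs| < 1
K = np.eye(2)
L = discrete_lyapunov(A, K)
print(np.allclose(A.T @ L @ A - L, -K))       # True
```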

Journal ArticleDOI
H. Henami
TL;DR: In this article, a matrix identity often met in linear digital filtering and recursive estimation is derived by use of matrix projection operators and their properties; the derivation is novel and straightforward.
Abstract: In problems of linear digital filtering and recursive estimation, a matrix identity (1) is often met. Here this matrix identity is derived by use of matrix projection operators and their properties. This derivation is novel, straightforward, and of tutorial value.
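
Identity (1) is not reproduced in this listing; the identity most often met in recursive estimation is the matrix-inversion lemma, so the following numerical spot-check assumes that is the one intended:

```python
import numpy as np

rng = np.random.default_rng(0)

# The identity most often met in recursive estimation is the
# matrix-inversion lemma (the letter's identity (1) is not shown in
# this listing, so this is an assumption):
#   (P^-1 + H^T R^-1 H)^-1 = P - P H^T (H P H^T + R)^-1 H P
P = np.eye(3) + 0.1 * rng.standard_normal((3, 3)); P = P @ P.T
R = np.eye(2)
H = rng.standard_normal((2, 3))

lhs = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)
rhs = P - P @ H.T @ np.linalg.inv(H @ P @ H.T + R) @ H @ P
print(np.allclose(lhs, rhs))                  # True
```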

Journal ArticleDOI
TL;DR: The inversion of nonsingular matrices is considered; a method is developed which starts with an arbitrary partitioning of the given matrix and can be applied to certain important classes of matrices, notably those that are "dominated by the diagonal."
Abstract: The inversion of nonsingular matrices is considered. A method is developed which starts with an arbitrary partitioning of the given matrix. The separate submatrices are grouped into sets determined by the nonzero entries of some appropriate group, G, of permutation matrices. The group structure of G then establishes a sequence of operations on these sets of submatrices from which the corresponding representation of the inverse is obtained. Whether the method described is to be preferred to, say, Gauss's algorithm will depend on the capabilities that are required by other parts of the algorithm that is to be implemented in the special-purpose parallel computer. The basic speed, measured by the count of parallel multiplications and divisions, is comparable to that obtained with Gauss's algorithm and is slightly better under certain conditions. The principal difference is that this method uses primarily matrix multiplication, whereas Gauss's algorithm uses primarily row combinations. When the special-purpose computer under design must supply this capability anyway, the method developed here should be considered. Application of the process is limited to matrices for which we can set up a partitioning such that we can guarantee, a priori, that certain of the submatrices are nonsingular. Hence the method is not useful for arbitrary nonsingular matrices. However, it can be applied to certain important classes of matrices, notably those that are "dominated by the diagonal." Noise covariance matrices are of this type; therefore the method can be applied to them. The inversion of a noise covariance matrix is required in some problems of optimal prediction and control. It is for applications of this sort that the method seems particularly attractive.
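
The simplest instance of inversion from a partitioning is the 2x2 block case via the Schur complement; the paper's group-theoretic organisation of the submatrices is not reproduced here, only the underlying block-inversion step it builds on:

```python
import numpy as np

def block_inverse(A, k):
    """Invert A from a 2x2 partitioning with k x k leading block P,
    using only submatrix products and inverses (Schur complement).
    Assumes P and its Schur complement are nonsingular, which holds
    e.g. for matrices dominated by the diagonal."""
    P, Q = A[:k, :k], A[:k, k:]
    R, S = A[k:, :k], A[k:, k:]
    Pi = np.linalg.inv(P)
    Si = np.linalg.inv(S - R @ Pi @ Q)        # inverse Schur complement
    top = np.hstack([Pi + Pi @ Q @ Si @ R @ Pi, -Pi @ Q @ Si])
    bot = np.hstack([-Si @ R @ Pi, Si])
    return np.vstack([top, bot])

A = np.diag([4.0, 5.0, 6.0, 7.0]) + 0.5       # dominated by the diagonal
print(np.allclose(block_inverse(A, 2), np.linalg.inv(A)))
```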

Journal ArticleDOI
D. Roberts
TL;DR: In this article, a computer program for finding the minimal realizations of transfer function matrices is described, and an example is included to illustrate the procedure.
Abstract: A computer program for finding the minimal realizations of transfer function matrices is described. An example is included to illustrate the procedure.

Journal ArticleDOI
TL;DR: The main results of this paper deal with obtaining necessary and sufficient conditions on A in order to ensure the existence of P and P' so that A is equivalent to the canonical matrix with parameters (K, n), where A need not be circulant.

Journal ArticleDOI
TL;DR: In this paper, the structures of the frequently-occurring hyper-circulant and hyper-Jacobi matrices are examined, and it is shown how the calculation of any analytic function of such matrices may be reduced to the calculated functions of the submatrices.
Abstract: In the numerical analysis of physical problems, there often arise large matrices which exhibit certain kinds of block-symmetry when partitioned appropriately. In this article, the structures of the frequently-occurring hyper-circulant and hyper-Jacobi matrices are examined, and it is shown how the calculation of any analytic function of such matrices may be reduced to the calculation of functions of the submatrices. Examples drawn from current engineering literature are given as well as small illustrative examples.
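
For the hyper-circulant (block-circulant) case the reduction is explicit: the discrete Fourier transform decouples the matrix into m submatrix-sized blocks, the function is applied to each block, and an inverse transform reassembles the result. A sketch, with matrix inversion standing in for a general analytic function (the blocks below are illustrative):

```python
import numpy as np

def blockcirc_fn(blocks, f):
    """Apply an analytic matrix function f to the block-circulant
    ("hyper-circulant") matrix A with A[i, j] = blocks[(i - j) % m].
    The DFT decouples A into m small blocks, f is applied to each,
    and the inverse DFT reassembles f(A)."""
    m, n = len(blocks), len(blocks[0])
    D = np.fft.fft(np.stack(blocks).astype(complex), axis=0)
    C = np.fft.ifft(np.stack([f(Dj) for Dj in D]), axis=0)
    out = np.zeros((m * n, m * n), dtype=complex)
    for i in range(m):
        for j in range(m):
            out[i*n:(i+1)*n, j*n:(j+1)*n] = C[(i - j) % m]
    return out.real

B0, B1 = np.array([[4.0, 1.0], [0.0, 4.0]]), np.array([[1.0, 0.0], [0.5, 1.0]])
A = np.block([[B0, B1], [B1, B0]])
print(np.allclose(blockcirc_fn([B0, B1], np.linalg.inv), np.linalg.inv(A)))
```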


Journal ArticleDOI
01 Nov 1969
TL;DR: In this article, an additional recursive property of companion matrices is derived; the two properties are combined to suggest a substantially improved algorithm for the evaluation of functions of a companion matrix, and it is indicated that the algorithm reduces to a bare minimum the complex arithmetic operations stemming from the presence of complex eigenvalues in real matrices.
Abstract: It was recently shown that companion matrices possess a recursive property which allows their functions to be computed efficiently and accurately. In this letter, an additional recursive property of companion matrices is derived. These two properties are then combined to suggest a substantially improved algorithm for the evaluation of functions of companion matrices. It is also indicated that by using this algorithm, the complex arithmetic operations stemming from the presence of complex eigenvalues in real matrices are reduced to a bare minimum.
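
A recursive property of this kind can be sketched for the exponential of a companion matrix: only the first row needs to be accumulated term by term, and every further row follows by one cheap shift-and-subtract. The series truncation and the test polynomial below are illustrative assumptions, not the letter's algorithm:

```python
import numpy as np

def companion_exp(a, terms=60):
    """exp(C) for the companion matrix C of the monic polynomial
    x^n + a[n-1] x^(n-1) + ... + a[0] (ones on the superdiagonal,
    last row -a). Only the first row is built by Taylor series;
    the remaining rows follow from the recursion row_{i+1} = row_i C,
    which costs one shift-and-subtract per row."""
    n = len(a)

    def times_C(r):                       # r -> r C, using C's structure
        out = np.empty(n)
        out[0] = -r[-1] * a[0]
        out[1:] = r[:-1] - r[-1] * a[1:]
        return out

    # first row of exp(C): Taylor series, with powers of C reduced
    # to coefficient vectors in the basis 1, x, ..., x^(n-1)
    row, power, fact = np.zeros(n), np.zeros(n), 1.0
    power[0] = 1.0                        # coefficients of x^0
    for k in range(terms):
        row += power / fact
        power, fact = times_C(power), fact * (k + 1)

    rows = [row]
    for _ in range(n - 1):                # the recursive property
        rows.append(times_C(rows[-1]))
    return np.vstack(rows)

a = np.array([1.0, 2.0])                  # x^2 + 2x + 1: double root -1
print(np.round(companion_exp(a), 4))
```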


Journal ArticleDOI
01 Feb 1969
TL;DR: A theorem that has been conjectured and used in statistical applications, but so far not proved, is established: if two real symmetric matrices have complementary nonzero spectra and their sum has the combined eigenvalues, then their product is zero.
Abstract: The following theorem has been conjectured and used in statistical applications, but, so far as known, has not been proved. Theorem. A and B are real n x n symmetric matrices with lambda_1, ..., lambda_r, 0, ..., 0 and 0, ..., 0, lambda_{r+1}, ..., lambda_n (lambda_i != 0) as eigenvalues respectively. If A + B has lambda_1, lambda_2, ..., lambda_n as eigenvalues, then AB = 0. Proof. If C is a symmetric matrix having lambda_1, ..., lambda_n as eigenvalues, then C can be put in the dyadic form [1]
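
A numerical illustration of the theorem's content (not a proof): build A and B with complementary spectra on a shared eigenbasis, so that A + B has the combined eigenvalues, and check that AB = 0. The matrices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # shared eigenbasis

# A carries eigenvalues 3, 1, 0, 0; B carries 0, 0, -2, 5
A = Q @ np.diag([3.0, 1.0, 0.0, 0.0]) @ Q.T
B = Q @ np.diag([0.0, 0.0, -2.0, 5.0]) @ Q.T

print(np.round(np.linalg.eigvalsh(A + B), 6))      # {-2, 1, 3, 5}
print(np.allclose(A @ B, 0))                       # AB = 0
```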

Journal ArticleDOI
01 Jan 1969
TL;DR: In this article, it is noted that the doubly stochastic scaling theorem for finite positive matrices does not hold for infinite matrices, and a proper generalization to infinite matrices is proved, with Brouwer's fixed-point theorem replaced by Schauder's.
Abstract: 1. Let A = (a_ij) be an infinite matrix with positive elements a_ij > 0, i, j = 0, 1, ... (matrices (a_ij) with a_ij > 0 will be called in the sequel positive matrices). It was proved in [3] that (1) if A is a finite positive matrix, a unique doubly stochastic matrix T exists such that T = D_1 A D_2, where D_1 and D_2 are diagonal matrices with all elements on the diagonal positive and are unique up to a scalar factor. The method used in [3], and introduced first in [4], is a constructive one and consists in alternately normalizing rows and columns of A and proving the convergence of this procedure. Another proof of (1) was given in [1]. This second proof uses, besides Brouwer's fixed point theorem, the fact that (2) the set {x = (x_0, x_1, ..., x_n); x_i real numbers, sum_i x_i = 1 and x_i > 0} is homeomorphic to an n-dimensional ball. Although a purely existential one, this second proof contains a statement about the existence of directions of fixed points for some mapping defined by help of a finite matrix A. In this paper we note that statement (1) does not hold for infinite matrices and prove a theorem generalizing properly (1) to the case of infinite matrices. Essentially, both proofs in [1] and in [3] could be, with some nontrivial changes, applied to give the desired generalization. The difficulty in generalizing the proof given in [3] consists i.a. in the fact that for an infinite matrix sum_j a_ij (or sum_i a_ij) is not always finite. The idea of our proof is similar to that of [1], except that (2) is not used and that Brouwer's theorem is replaced by the theorem of Schauder (see [2]). In the sequel a matrix A = (a_ij) with a_ij > 0 will be called a positive matrix and a diagonal matrix with positive diagonal elements will be called a positive diagonal matrix. Finally, delta_ij = 0 for i != j and 1 for i = j will denote the Kronecker delta.
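
The constructive procedure of [3]/[4] that the abstract refers to is simple to sketch for a finite positive matrix: alternately normalize rows and columns until the matrix is doubly stochastic:

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix A
    (the constructive procedure of [3]/[4]); for finite A this
    converges to the doubly stochastic T = D_1 A D_2 of statement (1)."""
    T = np.array(A, dtype=float)
    for _ in range(iters):
        T /= T.sum(axis=1, keepdims=True)   # make row sums 1
        T /= T.sum(axis=0, keepdims=True)   # make column sums 1
    return T

A = np.array([[1.0, 2.0], [3.0, 4.0]])
T = sinkhorn(A)
print(T.sum(axis=0), T.sum(axis=1))         # both ~ [1, 1]
```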

Journal ArticleDOI
TL;DR: In this article, it is shown that in a number of cases the multiplication of a considerable number of large matrices can be effectively carried out by expanding the matrix product in a series about nearby matrices whose product is comparatively easy to calculate.
Abstract: In many problems, both of a mathematical and practical nature, it is necessary to multiply a considerable number of matrices of large dimensions. At the present time no acceptable and sufficiently general methods exist for such multiplication. The aim of the present paper is to show that in a number of cases the above operation can be effectively carried out by the expansion of the matrix product in a series about matrices near those to be multiplied, but having a product which is comparatively easy to calculate.
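
To first order the idea reads: if each factor A_i = A0_i + E_i with small E_i and the product of the A0_i is easy, then the product of the A_i is the easy product plus one correction term per factor, with an error of second order in the E_i. A numerical sketch (the factors below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, eps = 4, 3, 1e-4

def prod(mats):
    """Left-to-right product of a list of m x m matrices."""
    out = np.eye(m)
    for M in mats:
        out = out @ M
    return out

A0 = [np.diag([1.0, 2.0, 3.0]) for _ in range(n)]   # easy product
E = [eps * rng.standard_normal((m, m)) for _ in range(n)]

exact = prod([A0[i] + E[i] for i in range(n)])
# first-order expansion about the easily multiplied factors A0[i]:
approx = prod(A0)
for i in range(n):
    approx = approx + prod(A0[:i]) @ E[i] @ prod(A0[i+1:])

print(np.abs(exact - approx).max())                 # O(eps^2)
```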