
Showing papers on "Matrix analysis published in 1970"


Journal ArticleDOI
Shmuel Winograd
TL;DR: Following results of Pan and Motzkin on polynomial evaluation and on the product of a matrix by a vector, a new algorithm for matrix multiplication is obtained which requires about (1/2)n^3 multiplications.
Abstract: The number of multiplications and divisions required in certain computations is investigated. In particular, results of Pan and Motzkin about polynomial evaluation, as well as similar results about the product of a matrix by a vector, are obtained. As an application of the results on the product of a matrix by a vector, a new algorithm for matrix multiplication, which requires about (1/2)n^3 multiplications, is obtained.
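
The multiplication count comes from Winograd's pairing identity for inner products: precomputing a correction term per row of A and per column of B lets each of the n^2 product entries be formed with only n/2 multiplications. A minimal NumPy sketch, assuming square matrices of even order (the function name and layout are illustrative, not from the paper):

import numpy as np

def winograd_matmul(A, B):
    # Winograd inner-product scheme: about n^3/2 multiplications in the
    # inner loops, plus O(n^2) multiplications for the correction terms.
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % 2 == 0
    xi = sum(A[:, 2*j] * A[:, 2*j + 1] for j in range(n // 2))   # row terms
    eta = sum(B[2*j, :] * B[2*j + 1, :] for j in range(n // 2))  # column terms
    C = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            s = sum((A[i, 2*j] + B[2*j + 1, k]) * (A[i, 2*j + 1] + B[2*j, k])
                    for j in range(n // 2))
            C[i, k] = s - xi[i] - eta[k]
    return C

Each pairing (a1 + b2)(a2 + b1) contributes two terms of the inner product at the cost of one multiplication; the unwanted cross terms are cancelled by the precomputed xi and eta.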

152 citations


Journal ArticleDOI
TL;DR: This study suggests quantitatively how rapidly sparse matrices fill up for increasing densities, and emphasizes the necessity for reordering to minimize fill-in.
Abstract: A comparison in the context of sparse matrices is made between the Product Form of the Inverse, PFI (a form of Gauss-Jordan elimination), and the Elimination Form of the Inverse, EFI (a form of Gaussian elimination). The precise relation of the elements of these two forms of the inverse is given in terms of the nontrivial elements of the three matrices L, U, U^-1 associated with the triangular factorization of the coefficient matrix A; i.e., A = LU, where L is lower triangular and U is unit upper triangular. It is shown that the zero-nonzero structure of the PFI always has more nonzeros than that of the EFI. It is proved that Gaussian elimination is a minimal algorithm with respect to preserving sparseness if the diagonal elements of the matrix A are nonzero. However, Gaussian elimination is not necessarily minimal if A has some zero diagonal elements. The same statements hold for the PFI as well. A probabilistic study of fill-in and computing times for the PFI and EFI sparse matrix algorithms is presented. This study suggests quantitatively how rapidly sparse matrices fill up for increasing densities, and emphasizes the necessity for reordering to minimize fill-in.

I. Introduction. A sparse matrix is a matrix with very few nonzero elements. In many applications, a rough rule seems to be that there are O(N) nonzero entries; typically, say, 2 to 10 nonzero entries per row. If the dimension N of the matrix is not large, then there is no compelling reason to treat sparse matrices differently from full matrices. It is when N becomes large and one attempts computations with the sparse matrix that it becomes necessary to take advantage of the zeros. The reason for this is obvious: there is a storage requirement of the order of N^2 and an arithmetic operations count of the order of N^3 for many matrix algorithms using the full matrix. On the other hand, by storing only nonzero quantities and using logical operations to decide when an arithmetic operation is necessary, the storage requirement and arithmetic operations count can be reduced by a factor of N in many instances. Of course, this not only becomes a sizable savings of computer time, but also dictates whether or not some problems can be attempted. Computations with sparse matrices are not new. Iterative techniques for these matrices, especially those related to the solution of partial differential equations, have been extensively developed (1). Sparse matrix methods for solving linear equations by direct methods have been used for a long time in linear programming, and there is a large body of literature, computational experience, programs, and artfulness which has been built up in this area (2)-(15). In most linear programming codes, the product form of the inverse (PFI) is the method used to solve linear equations (16)-(19), although there are exceptions (20)-(21). Methods for scaling, pivoting for accuracy and sparseness, structuring data, and handling input-output have been extensively developed (22)-(72). However, there do not seem to exist rigorous results
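
The fill-in phenomenon the study measures can be reproduced directly: eliminate without pivoting and count positions that were zero in A but are nonzero in the factors. A minimal sketch (my own illustration, not the paper's code); the arrowhead example shows why reordering matters:

import numpy as np

def lu_fill_in(A, tol=1e-12):
    # Gaussian elimination without pivoting; returns the number of fill-ins,
    # i.e. positions zero in A that become nonzero in the L and U factors.
    LU = np.array(A, dtype=float)
    n = LU.shape[0]
    nonzeros_before = np.count_nonzero(np.abs(LU) > tol)
    for k in range(n - 1):
        for i in range(k + 1, n):
            if abs(LU[i, k]) > tol:            # skip structural zeros
                LU[i, k] /= LU[k, k]
                LU[i, k + 1:] -= LU[i, k] * LU[k, k + 1:]
    return np.count_nonzero(np.abs(LU) > tol) - nonzeros_before

# Arrowhead matrix: a dense first row and column fills in completely,
# while the reversed ordering of the same matrix produces no fill-in.
n = 6
A = 4.0 * np.eye(n)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 4.0
print(lu_fill_in(A))                 # (n-1)*(n-2) = 20 fill-ins
print(lu_fill_in(A[::-1, ::-1]))     # 0 fill-ins after reordering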

63 citations


Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions are given for the eigenvalues of a real matrix to lie within a certain region of the complex plane.
Abstract: Necessary and sufficient conditions are found for the eigenvalues of a real matrix to lie within a certain region of the complex plane.
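
The paper treats a general region; as a hedged illustration for the special case of the open left half-plane, the classical Lyapunov condition is necessary and sufficient and can be tested without computing eigenvalues. A sketch, assuming SciPy's Lyapunov solver:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def eigs_in_left_half_plane(A):
    # Lyapunov test: all eigenvalues of A have negative real part iff
    # A^T P + P A = -I has a symmetric positive-definite solution P.
    P = solve_continuous_lyapunov(A.T, -np.eye(A.shape[0]))
    try:
        np.linalg.cholesky((P + P.T) / 2)   # succeeds iff P is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

print(eigs_in_left_half_plane(np.array([[-1.0, 3.0], [0.0, -2.0]])))  # True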

47 citations


Book ChapterDOI
TL;DR: In this article, symmetry properties of reduced density matrices and natural p-states are discussed, resulting from a given symmetry behavior of the wave functions from which the density matrices are constructed.
Abstract: This chapter discusses symmetry properties of reduced density matrices and natural p-states resulting from a given symmetry behavior of the wave functions from which the density matrices are constructed. The p-densities are defined as the diagonal elements of the corresponding density matrices. Reduced density matrices have received increasing interest in quantum-chemical investigations. On one hand, numerical first- and second-order density matrices (one- and two-particle density matrices) for certain states of simple atomic and molecular systems have been calculated starting from rather good approximate wave functions. These matrices are particularly useful for testing the validity of different wave functions of the same system. Different approximations are most conveniently compared in terms of the eigenstates of these matrices, that is, the natural spin-orbitals and natural spin-geminals, as well as the corresponding eigenvalues, the occupation numbers. On the other hand, the general properties of these matrices and their eigenstates, that is, those properties that are independent of the nature of the wave functions used in their construction, are especially interesting, and some effort has been spent on studying them.
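
Concretely, the natural spin-orbitals and occupation numbers mentioned above are the eigenvectors and eigenvalues of the first-order reduced density matrix. A toy sketch (the matrix below is an illustrative Hermitian 1-RDM, not taken from the chapter):

import numpy as np

# Illustrative first-order reduced density matrix (Hermitian, trace equal to
# the electron number); in practice it is built from an approximate wave function.
gamma = np.array([[1.90, 0.10, 0.00],
                  [0.10, 0.08, 0.02],
                  [0.00, 0.02, 0.02]])

occ, orbitals = np.linalg.eigh(gamma)        # natural orbitals and occupations
order = np.argsort(occ)[::-1]                # sort by decreasing occupation
occ, orbitals = occ[order], orbitals[:, order]
print(occ, occ.sum())                        # occupation numbers; sum = trace = 2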

29 citations


Journal ArticleDOI
TL;DR: A new method is presented for evaluating the exponential of a matrix with distinct eigenvalues in closed form; the technique uses the Vandermonde matrix and is in many respects superior to other known methods.
Abstract: A new method for evaluating the exponential of a matrix with distinct eigenvalues in closed form is presented. This technique uses the Vandermonde matrix and is in many respects superior to other known methods.
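
The Vandermonde construction can be sketched as follows: with distinct eigenvalues, e^(At) equals the degree-(n-1) polynomial p(A) whose values p(lam_i) = exp(lam_i t) are fixed by solving a Vandermonde system. A hedged NumPy sketch, not necessarily the paper's exact formulation:

import numpy as np

def expm_vandermonde(A, t=1.0):
    # Closed-form matrix exponential for A with distinct eigenvalues:
    # interpolate exp at the eigenvalues, then evaluate the polynomial at A.
    lam = np.linalg.eigvals(A)
    V = np.vander(lam, increasing=True)          # V[i, j] = lam[i]**j
    alpha = np.linalg.solve(V, np.exp(lam * t))  # polynomial coefficients
    E = np.zeros(A.shape, dtype=complex)
    P = np.eye(A.shape[0], dtype=complex)
    for a in alpha:
        E += a * P
        P = P @ A
    return E.real if np.isrealobj(A) else E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # distinct eigenvalues -1 and -2
print(expm_vandermonde(A))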

25 citations


Journal ArticleDOI
TL;DR: A method is presented for implementing Gaussian elimination with subsequent back substitution that minimizes the amount of central computer memory required, provides a more flexible means of manipulating large matrices, and dramatically reduces computer time.
Abstract: The most efficient means of solving most systems of linear equations arising in structural analysis is by Gaussian elimination and subsequent back substitution. A method is presented for its implementation. Direct solutions are obtained with sparse matrix factors which preserve the operations of Gaussian elimination for repeat solutions. The method, together with techniques for its application, is graphically described. Its use in many types of engineering problems requiring solutions to systems of linear equations will: (1) minimize the amount of central computer memory required; (2) provide a more flexible means of manipulating large matrices; and (3) dramatically reduce computer time.
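
Preserving the elimination operations for repeat solutions corresponds to factoring once and reusing the factors for each new right-hand side. A minimal sketch with SciPy's dense LU routines (the paper's own sparse data structures are not reproduced):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
lu, piv = lu_factor(A)                # O(n^3) elimination, done once

for load_case in (np.array([1.0, 0.0, 0.0]),
                  np.array([0.0, 2.0, 1.0])):
    x = lu_solve((lu, piv), load_case)   # each repeat solution is only O(n^2)
    print(x)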

21 citations


01 Aug 1970
TL;DR: Theoretical and experimental results on the method of conjugate gradients for nonlinear programming, and relations between properties of a graph and the eigenvalues of its adjacency matrix, are presented.
Abstract: Contents: Integer and linear programming; Theoretical and experimental results on the method of conjugate gradients for nonlinear programming; Iterative procedures for finding roots of functions; Bounds for eigenvalues and complex matrices; Relations between properties of the graph and the eigenvalues of its adjacency matrix; and Sparse matrices.
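
As a small illustration of the graph-spectrum topic listed above: for a k-regular graph the largest adjacency eigenvalue equals the degree k. A sketch using the 6-cycle (my example, not from the report):

import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                      # adjacency matrix of the cycle C6
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

eigs = np.sort(np.linalg.eigvalsh(A))   # spectrum of C_n: 2*cos(2*pi*k/n)
print(eigs)                             # largest eigenvalue = degree = 2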

17 citations


Journal ArticleDOI
01 Mar 1970

10 citations


Book
01 Jan 1970
TL;DR: In this paper, a thorough discussion of systems of linear equations and their solutions is presented, with many illustrative examples, and the student is led to view the mathematical content intuitively, as an aid to understanding.
Abstract: This volume presents a thorough discussion of systems of linear equations and their solutions. Vectors and matrices are introduced as required, and an account of determinants is given. Great emphasis has been placed on keeping the presentation as simple as possible, with many illustrative examples. While all mathematical assertions are proved, the student is led to view the mathematical content intuitively, as an aid to understanding. The text treats the coordinate geometry of lines, planes and quadrics, which provides a natural application for linear algebra and at the same time furnishes a geometrical interpretation to illustrate the algebraic concepts.

10 citations


Journal ArticleDOI
TL;DR: In this article, the problem of expressing the (2s + 1) × (2s + 1) covariantly defined matrices Sμνρ… (used in theories of particles with spin) in terms of the angular-momentum matrices s is shown to be equivalent to finding a certain type of polynomial, and explicit expressions for the polynomials, recursion formulas, and differentiation properties are given.
Abstract: The problem of expressing the (2s + 1) × (2s + 1) covariantly defined matrices Sμνρ… (used in theories of particles with spin) in terms of the angular‐momentum matrices s is shown to be equivalent to the problem of finding a certain type of polynomial. Explicit expressions for the polynomials, recursion formulas, and differentiation properties are given.
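
For context, the (2s + 1) × (2s + 1) angular-momentum matrices themselves are built from the standard ladder-operator elements; a sketch of that standard construction (not the paper's polynomial expressions):

import numpy as np

def spin_matrices(s):
    # Standard (2s+1)-dimensional angular-momentum matrices in the S_z basis.
    m = np.arange(s, -s - 1, -1)                   # m = s, s-1, ..., -s
    Sz = np.diag(m)
    Sp = np.zeros((len(m), len(m)))                # raising operator S_+
    for k in range(len(m) - 1):
        mk = m[k + 1]                              # <m+1| S_+ |m> element
        Sp[k, k + 1] = np.sqrt(s * (s + 1) - mk * (mk + 1))
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    return Sx, Sy, Sz

Sx, Sy, Sz = spin_matrices(1.0)
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))     # commutation [Sx, Sy] = i Sz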

9 citations


Journal ArticleDOI
TL;DR: In this paper, a simple closed-loop system-matrix assignment method is presented which permits the system matrix to possess specific desired eigenvalues of any multiplicity.
Abstract: A simple method of closed-loop system-matrix assignment is presented which also permits the system matrix to possess specific desired eigenvalues of any multiplicity.
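
For the single-input case, Ackermann's formula gives such an assignment in closed form and places repeated eigenvalues without difficulty. A hedged sketch (the paper's own method may differ):

import numpy as np

def ackermann_gain(A, b, poles):
    # Feedback F (u = F x) such that A + b F has the desired eigenvalues,
    # repeated poles allowed; requires (A, b) controllable.
    n = A.shape[0]
    ctrb = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    coeffs = np.poly(poles)                    # desired characteristic polynomial
    pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    e_last = np.zeros(n)
    e_last[-1] = 1.0
    return -e_last @ np.linalg.solve(ctrb, pA)

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # double integrator
b = np.array([0.0, 1.0])
F = ackermann_gain(A, b, [-2.0, -2.0])        # eigenvalue -2 with multiplicity 2
print(np.linalg.eigvals(A + np.outer(b, F)))  # both closed-loop poles at -2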

Journal ArticleDOI
TL;DR: In this article, it was shown that a pair of complex matrices is monotone with respect to the pair of closed convex cones (in the corresponding complex spaces) if and only if
Abstract: Such matrices and operators are of great importance in numerical analysis [2, Chap. 3]. For m = n, (1) is equivalent to the existence and nonnegativity of A-1 [2, p. 376]. Rectangular real matrices of monotone kind were studied in [3] where, inter alia, (1) was proved equivalent to the existence of a nonnegative left inverse of A. These results are extended below to complex matrices. A pair {A, B} of complex matrices is monotone with respect to the pair {S, T} of closed convex cones (in the corresponding complex spaces) if
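
In the square real case the defining property reduces to a computable test: A is of monotone kind iff A is nonsingular and A^-1 is entrywise nonnegative. A minimal sketch (illustration only; the paper's rectangular and complex-cone setting is more general):

import numpy as np

def is_monotone(A, tol=1e-12):
    # Square real case: Ax >= 0 implies x >= 0 iff A^{-1} exists and is >= 0.
    try:
        A_inv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(A_inv >= -tol))

# Tridiagonal M-matrix: a classical monotone example from numerical analysis.
A = np.diag([2.0] * 4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)
print(is_monotone(A))   # True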

Journal ArticleDOI
TL;DR: This paper investigates how the special structure of matrices can be described and utilized for efficient computing, saving memory space and avoiding superfluous operations.
Abstract: A matrix calculus is introduced with the intention of developing data structures suitable for a high-level algorithmic language for mathematical programming. The paper investigates how the special structure of matrices can be described and utilized for efficient computing, saving memory space and avoiding superfluous operations. Sequences of matrices (and sequences of sequences of matrices) are considered, and matrix operators are extended to sequence operators and cumulative operators. Algorithms are given which use symbol manipulation of matrix expressions so as to find the forms best suited for computation. These forms are called normal forms. Several completeness results are obtained, in the sense that for each expression an equivalent expression in normal form can be found within a specified calculus.
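
A small instance of finding "the form best suited for computation" is choosing the cheapest parenthesization of a matrix product, which the classic dynamic program below computes (my illustration; the paper's calculus and normal forms are richer):

def matrix_chain_cost(dims):
    # dims[i] x dims[i+1] is the shape of the i-th matrix; returns the minimal
    # number of scalar multiplications over all parenthesizations.
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            cost[i][j] = min(cost[i][k] + cost[k + 1][j]
                             + dims[i] * dims[k + 1] * dims[j + 1]
                             for k in range(i, j))
    return cost[0][n - 1]

# (A @ B) @ x costs about 1.001e9 scalar multiplications for 1000x1000 factors,
# while A @ (B @ x) costs only 2e6: the chosen normal form matters.
print(matrix_chain_cost([1000, 1000, 1000, 1]))   # 2000000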


Journal ArticleDOI
A. Simpson1
TL;DR: In this article, a recurrence relation is derived which facilitates the inversion of matrices of the form Q(s) + BG, Q being a lambda matrix, while providing insight into the structure of the inverse.
Abstract: Matrices of the form Q(s) + BG, Q being a lambda matrix, occur frequently in control applications, and it is often required to invert them, as, for example, in response calculations. Using a variant of a method proposed by the author as a means for solving the mode-control problem, a recurrence relation is derived which facilitates such calculations while, at the same time, providing insight into the structure of the inverse.
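
When B is n × m and G is m × n with small m, the inverse of Q(s) + BG can be updated cheaply from the inverse of Q(s); the Woodbury identity gives one such route (a sketch in the same spirit, not the author's recurrence):

import numpy as np

def inv_Q_plus_BG(Q_inv, B, G):
    # Woodbury identity:
    # (Q + B G)^{-1} = Q^{-1} - Q^{-1} B (I_m + G Q^{-1} B)^{-1} G Q^{-1},
    # requiring only an m x m inverse when Q^{-1} is already known.
    m = G.shape[0]
    T = np.linalg.inv(np.eye(m) + G @ Q_inv @ B)
    return Q_inv - Q_inv @ B @ T @ G @ Q_inv

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # stand-in for Q(s) at fixed s
B, G = rng.standard_normal((5, 2)), rng.standard_normal((2, 5))
print(np.allclose(inv_Q_plus_BG(np.linalg.inv(Q), B, G),
                  np.linalg.inv(Q + B @ G)))      # True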

01 Jul 1970
TL;DR: In this article, two classes of matrices which are useful in testing numerical algorithms for problems in linear algebra are presented, and their determinants, eigenvalues, eigenvectors, and inverses are deduced in analytic forms.
Abstract: Two classes of matrices which are useful in testing numerical algorithms for problems in linear algebra are presented. Their determinants, eigenvalues, eigenvectors, and inverses are deduced in analytic forms. These are easily generated by machine or by hand as functions of certain parameters in the original matrices. By appropriately selecting the parameters, one is able to generate ill-conditioned matrices without having to deal with high-order matrices.
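
One common device in the same spirit (not necessarily either of the paper's two classes) is a similarity construction, where eigenvalues, eigenvectors, and the inverse are all known analytically from the parameters:

import numpy as np

def parametric_test_matrix(eigenvalues, seed=0):
    # A = P diag(lambda) P^{-1}: the eigenvalues are the chosen parameters,
    # the eigenvectors are the columns of P, and A^{-1} = P diag(1/lambda) P^{-1}.
    rng = np.random.default_rng(seed)
    n = len(eigenvalues)
    P = rng.standard_normal((n, n))
    return P @ np.diag(eigenvalues) @ np.linalg.inv(P)

# Widely spread eigenvalues give an ill-conditioned matrix of modest order.
A = parametric_test_matrix([1e-8, 1.0, 1e8])
print(np.linalg.cond(A))   # of order 1e16, without any high-order matrix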


01 Mar 1970
TL;DR: Triangular decomposition is used as an aid in determining eigenvalues of large-order banded symmetric matrices.
Abstract: Triangular decomposition as an aid in determining eigenvalues of large-order banded symmetric matrices.
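
Triangular decomposition helps here through Sylvester's law of inertia: factoring A - sigma*I = L D L^T, the number of negative diagonal entries of D counts the eigenvalues below sigma, which drives a bisection search. A sketch for the tridiagonal (bandwidth-1) case; assumptions are mine, and the report's method may differ in detail:

import numpy as np

def eigenvalues_below(diag, off, sigma):
    # LDL^T pivots of (A - sigma I) for symmetric tridiagonal A via the
    # Sturm-type recurrence; by Sylvester's law of inertia the count of
    # negative pivots equals the number of eigenvalues less than sigma.
    count, d = 0, 1.0
    for i in range(len(diag)):
        d = diag[i] - sigma - (off[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:
            d = 1e-300          # guard against an exactly zero pivot
        if d < 0:
            count += 1
    return count

diag = np.array([2.0, 2.0, 2.0, 2.0])
off = np.array([-1.0, -1.0, -1.0])        # eigenvalues are 2 - 2*cos(k*pi/5)
print(eigenvalues_below(diag, off, 2.0))  # 2 eigenvalues lie below 2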

Journal ArticleDOI
TL;DR: In this article, a new subclass of decoupling matrices is defined in which the rows of the output matrix C are eigenvectors of the desired plant matrix (A + BF), and the coefficients of the feedback matrix F are shown to be linear functions of m closed-loop system poles.
Abstract: A new subclass of decoupling matrices is defined where the rows of the output matrix C are eigenvectors of the desired plant matrix (A + BF). The coefficients of the feedback matrix F are shown to be linear functions of m closed-loop system poles.
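
The defining property can be checked numerically: if each row of C is a left eigenvector of A + BF, then C(A + BF) equals a diagonal matrix times C, and the output dynamics decouple into m independent first-order modes. A toy verification (constructed example, not from the paper):

import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))               # rows: desired left eigenvectors
Lam = np.diag([-1.0, -2.0, -3.0, -4.0])       # desired closed-loop poles
A_cl = np.linalg.solve(W, Lam @ W)            # plays the role of A + BF

C = W[:2]                                     # output rows = left eigenvectors
# y = C x then obeys y' = C A_cl x = Lam[:2, :2] (C x): two decoupled modes.
print(np.allclose(C @ A_cl, Lam[:2, :2] @ C))  # True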