
Showing papers on "Matrix analysis published in 1991"



Book
01 Jan 1991
TL;DR: A natural sequel to the author's Combinatorial Matrix Theory, written with H. J. Ryser, this is the first book devoted exclusively to existence questions, constructive algorithms, enumeration questions, and other properties concerning classes of matrices of combinatorial significance.
Abstract: A natural sequel to the author's previous book Combinatorial Matrix Theory written with H. J. Ryser, this is the first book devoted exclusively to existence questions, constructive algorithms, enumeration questions, and other properties concerning classes of matrices of combinatorial significance. Several classes of matrices are thoroughly developed including the classes of matrices of 0's and 1's with a specified number of 1's in each row and column (equivalently, bipartite graphs with a specified degree sequence), symmetric matrices in such classes (equivalently, graphs with a specified degree sequence), tournament matrices with a specified number of 1's in each row (equivalently, tournaments with a specified score sequence), nonnegative matrices with specified row and column sums, and doubly stochastic matrices. Most of this material is presented for the first time in book format and the chapter on doubly stochastic matrices provides the most complete development of the topic to date.
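The existence question for the first class mentioned (0-1 matrices with prescribed row and column sums) has a classical answer in the Gale-Ryser theorem. As an illustrative sketch (a standard result, not code from the book):

```python
def gale_ryser_exists(r, c):
    """Gale-Ryser theorem: a 0-1 matrix with row sums r and column sums c
    exists iff the totals agree and, with r sorted in nonincreasing order,
    sum(r[:k]) <= sum(min(cj, k) for cj in c) for every k."""
    r = sorted(r, reverse=True)
    if sum(r) != sum(c):
        return False
    return all(sum(r[:k]) <= sum(min(cj, k) for cj in c)
               for k in range(1, len(r) + 1))

print(gale_ryser_exists([2, 2, 1], [2, 2, 1]))  # True: such a 3x3 matrix exists
print(gale_ryser_exists([3, 1], [2, 2]))        # False: a row sum of 3 needs 3 columns
```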

197 citations


Book
15 Mar 1991
TL;DR: This book develops real rational vector spaces and rational matrices as a framework for linear multivariable systems, covering polynomial matrix models, pole and zero structure at infinity, proper and omega-stable rational functions and matrices, and feedback system stability and stabilization.
Abstract: Real Rational Vector Spaces and Rational Matrices. Polynomial Matrix Models of Linear Multivariable Systems. Pole and Zero Structure of Rational Matrices at Infinity. Dynamics of Polynomial Matrix Models. Proper and Omega-Stable Rational Functions and Matrices. Feedback System Stability and Stabilization. Some Algebraic Design Problems. Notations. Appendices. Index.

178 citations


Journal ArticleDOI
TL;DR: A collection of 45 parametrized test matrices is presented, which includes matrices with known inverses or known eigenvalues, ill-conditioned or rank deficient matrices, and symmetric, positive definite, orthogonal, defective, involutory, and totally positive matrices.
Abstract: We present a collection of 45 parametrized test matrices. The matrices are mostly square, dense, nonrandom, and of arbitrary dimension. The collection includes matrices with known inverses or known eigenvalues, ill-conditioned or rank deficient matrices, and symmetric, positive definite, orthogonal, defective, involutory, and totally positive matrices. For each matrix we give a MATLAB M-file that generates it.
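As a sketch of what such a parametrized test matrix looks like in practice (in Python/NumPy rather than the collection's MATLAB M-files), the Hilbert matrix is a standard ill-conditioned example with a known inverse:

```python
import numpy as np

def hilbert(n):
    # H[i, j] = 1 / (i + j + 1): symmetric, positive definite, and
    # notoriously ill-conditioned even for small n
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

H = hilbert(6)
print(np.linalg.cond(H) > 1e6)   # True: cond(H) is roughly 1.5e7 for n = 6
```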

64 citations



25 Mar 1991
TL;DR: The paper presents a survey of major computational methods for spectral factorization applicable to discrete-time problems, and the properties of the resulting procedures are discussed.
Abstract: The factorization of rational spectral matrices arises in the analysis and design of linear systems via transfer function techniques. The paper presents a survey of major computational methods for factorization applicable to discrete-time problems. The properties of the resulting procedures are discussed and guidelines are provided for the user.
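For the scalar discrete-time case, the basic factorization task can be sketched numerically. Given the two-sided spectrum S(z) = 0.5 z + 1.25 + 0.5 z^{-1} (a made-up example, not from the paper), the minimum-phase factor is recovered from the root inside the unit circle:

```python
import numpy as np

# Coefficients of z * S(z) = 0.5 z^2 + 1.25 z + 0.5; the roots come in
# reciprocal pairs (z0, 1/z0), here -0.5 and -2.
roots = np.roots([0.5, 1.25, 0.5])
z0 = roots[np.abs(roots) < 1][0]       # the root inside the unit circle
W = np.array([1.0, -z0.real])          # W(z) = 1 - z0 z^{-1} = 1 + 0.5 z^{-1}
# Convolving W with its time reversal reconstructs the spectrum coefficients
S = np.convolve(W, W[::-1])
print(np.allclose(S, [0.5, 1.25, 0.5]))   # True
```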

44 citations


Journal ArticleDOI
TL;DR: This paper focuses on Hankel- and Vandermonde-like matrices and shows how the appropriately defined displacement structure yields fast triangular and orthogonal factorization algorithms for such matrices.
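The displacement structure in question can be made concrete with a small numerical sketch (a standard identity, not the paper's algorithm): for a Vandermonde matrix V on nodes x, the displacement diag(x) V - V Z with the lower shift Z has rank one:

```python
import numpy as np

x = np.array([0.3, -1.2, 2.0, 0.7, -0.4])
n = len(x)
V = np.vander(x, n, increasing=True)   # V[i, j] = x[i] ** j
D = np.diag(x)
Z = np.diag(np.ones(n - 1), -1)        # lower shift matrix
disp = D @ V - V @ Z                   # zero except the last column (x[i] ** n)
print(np.linalg.matrix_rank(disp))     # 1
```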

37 citations


Journal ArticleDOI
TL;DR: Recursive methods for generating conjugate directions with respect to an arbitrary matrix are investigated, and it is shown that restructuring the bidiagonal matrices makes it possible to avoid zero divisors for the Hestenes-Stiefel type schemes.
Abstract: Recursive methods for generating conjugate directions with respect to an arbitrary matrix are investigated. There are three basic techniques to achieve this aim: (i) minimizing a quadratic form, (ii) generation by projections, and (iii) use of matrix equations. These techniques are equivalent to each other; however, the third one is stressed in this paper because of its versatility. Among matrix equation forms, Hestenes-Stiefel type recursions and Lanczos type recursions are mentioned, where the recursion matrices are bidiagonal matrices in the simple case. With respect to the choice of recursion matrices, direct and reverse methods are introduced. The recursion matrices may have lower and upper triangular forms in the direct case, and they may be lower and upper Hessenberg matrices in the reverse case. The recursion matrices chosen here are as simple as possible; in fact, they have no more nonzero elements than a bidiagonal matrix. Consequently, the storage of four vectors suffices to perform the recursions in all cases. It is shown that restructuring the bidiagonal matrices makes it possible to avoid zero divisors for the Hestenes-Stiefel type schemes.
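As a toy illustration of "conjugate directions with respect to a matrix" (a Gram-Schmidt sketch, not the paper's bidiagonal recursions), directions with p_i^T A p_j = 0 for i != j can be generated by A-orthogonalizing the standard basis:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)          # symmetric positive definite test matrix

P = []                                # A-conjugate directions
for k in range(5):
    p = np.eye(5)[:, k].copy()
    for q in P:                       # remove A-components along earlier directions
        p -= (q @ A @ p) / (q @ A @ q) * q
    P.append(p)
P = np.column_stack(P)

G = P.T @ A @ P                       # Gram matrix in the A-inner product
print(np.allclose(G, np.diag(np.diag(G))))   # True: off-diagonal entries vanish
```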

32 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the conditions on A under which the set W(k:A) is convex or starshaped and obtained complete answers for hermitian matrices and partial results for general matrices.
Abstract: Let A be an n × n complex matrix. The kth matrix numerical range of A is the set W(k:A) = {X*AX : X is an n × k matrix such that X*X = Ik}. We study the conditions on A under which the set W(k:A) is convex or starshaped. We get complete answers for hermitian matrices and partial results for general matrices. Then we consider different matrix sets S, such as the collection of all scalar matrices, hermitian matrices, normal matrices, etc., and investigate the following problems: does A ∈ S imply X ∈ S for all X ∈ W(k:A); what can we say about A if all X ∈ W(k:A) are elements of S; if …, what is the largest k such that …? Some possible variations of the problem and related results are also discussed.

26 citations


Journal ArticleDOI
TL;DR: In this article, the symmetrical condensed TLM (transmission line matrix) node is studied and an efficient algorithm for TLM analysis using the symmetrical condensed node is developed; the case of a lossy medium is also discussed.
Abstract: A study of the symmetrical condensed TLM (transmission line matrix) node is presented. The study reveals not only that the characteristic scattering matrix satisfies the law of conservation of energy but also that electromagnetic fields are conserved even for finite node spacing. Using the results of this study, the authors develop an efficient algorithm for TLM analysis using the symmetrical condensed node. This algorithm significantly reduces the number of floating point operations so that the speed of computation is comparable to that of other expanded node analysis schemes. The case of a lossy medium is also discussed. With a better understanding of this symmetrical node and equipped with a fast algorithm, TLM analysis of three-dimensional electromagnetic problems can improve the computer-aided design of microwave and millimeter-wave circuits.

25 citations


Book
01 Jan 1991
TL;DR: In this book, the authors develop matrix methods for the static and dynamic analysis of structures, covering rod, beam, truss, and frame structures, structural stability, vibration, modal analysis, and the associated computer methods.
Abstract (contents; most chapters close with Problems and Exercises):
1 Background and Scope: Structural Analysis; Types of Structures Considered; Mechanics of Structures; Degrees of Freedom; Time Varying Loads; Computers and Algorithms; Systems of Units.
2 Rod Structures: Rod Theory; Rod Element Stiffness Matrix; Structural Stiffness Matrix; Boundary Conditions; Member Distributions and Reaction; Distributed Loads.
3 Beam Structures: Beam Theory; Beam Element Stiffness Matrix; Structural Stiffness Matrix; Equivalent Loads; Elastic Supports; Member Loads and Reactions.
4 Truss and Frame Analysis: Truss Analysis; Plane Frame Analysis; Space Frames; Determining the Rotation Matrix; Special Considerations; Substructuring.
5 Structural Stability: Elastic Stability; Stability of Truss Structures; Matrix Formulation for Truss Stability; Beams with Axial Forces; Beam Buckling; Matrix Analysis of Stability of Beams; Stability of Space Frames.
6 General Structural Principles I: Work and Strain Energy; Linear Elastic Structures; Virtual Work; Stationary Potential Energy; Ritz Approximate Analysis; The Finite Element Method; Stability Reconsidered.
7 Computer Methods I: Computers and Data Storage; Structural Analysis Programs; Node Renumbering; Solving Simultaneous Equations; Solving Eigenvalue Problems.
8 Dynamics of Elastic Systems: Harmonic Motion and Vibration; Complex Notation; Damping; Forced Response.
9 Vibration of Rod Structures: Rod Theory; Structural Connections; Exact Dynamic Stiffness Matrix; Approximate Matrix Formulation; Matrix Form of Dynamic Problems.
10 Vibration of Beam Structures: Spectral Analysis of Beams; Structural Connections; Exact Matrix Formulation; Approximate Matrix Formulation; Beam Structures Problems.
11 Modal Analysis of Frames: Dynamic Stiffness for Space Frames; Modal Matrix; Transformation to Principal Coordinates; Forced Damped Motion; The Modal Model; Dynamic Structural Testing; Structural Modification.
12 General Structural Principles II: Elements of Analytical Dynamics; Hamilton's Principle; Approximate Structural Theories; Lagrange's Equation; The Ritz Method; Ritz Method Applied to Discrete Systems; Rayleigh Quotient.
13 Computer Methods II: Finite Differences; Direct Integration Methods; Newmark's Method; Complete Solution of Eigensystems; Generalized Jacobi Method; Subspace Iteration; Selecting a Dynamic Solver.
A Matrices and Linear Algebra: Matrix Notation; Matrix Operations; Vector and Matrix Norms; Determinants; Solution of Simultaneous Equations; Eigenvectors and Eigenvalues; Vector Spaces.
B Spectral Analysis: Continuous Fourier Transform; Periodic Functions: Fourier Series; Discrete Fourier Transform; Fast Fourier Transform Algorithm.
C Computer Source Code: Compiling the Source Code; Manual and Tutorial; Source Code for STADYN; Source Code for MODDYN.
References.

Journal ArticleDOI
TL;DR: In this article, a statical-kinematic stiffness matrix is introduced to detect simple first-order mechanisms, which is based on the resolution of a given load into two mutually orthogonal components.
Abstract: The analysis is based on the resolution of a given load into two mutually orthogonal components, which allows the introduction of a statical-kinematic stiffness matrix. The issue of statical-kinematic duality is explored and extended to second-order analysis. A computationally efficient matrix method for detecting simple first-order mechanisms is presented.

Journal ArticleDOI
TL;DR: In this article, a lower bound for the maximum eigenvalue of symmetric positive semidefinite matrices was obtained for the non-symmetric case, where the angle of the matrix is known.

Journal ArticleDOI
TL;DR: In this paper, the problem of finding a mapping assigning to each polynomial f of degree n a vector x(f) ∈ Cⁿ such that the matrix B := A − a is a companion matrix of f is investigated.

Journal ArticleDOI
TL;DR: In this paper, it was shown that if two linear difference equations with invertible coefficient matrices are topologically equivalent and one of them has bounded coefficient matrix together with its inverse, then the coefficient matrix of the other equation is also bounded.


Journal ArticleDOI
TL;DR: A modification of the matrix sign function method that does not suffer from the drawback of complex arithmetic is proposed, and it is shown that the problem may be decomposed into smaller subproblems, which leads to considerable savings in computational work.
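For context, the classical Newton iteration for the matrix sign function (the standard method the paper modifies; the 2×2 example below is made up) is X_{k+1} = (X_k + X_k^{-1}) / 2:

```python
import numpy as np

A = np.array([[2.0,  1.0],
              [0.0, -3.0]])            # no eigenvalues on the imaginary axis
X = A.copy()
for _ in range(20):                    # Newton iteration, quadratically convergent
    X = 0.5 * (X + np.linalg.inv(X))

# sign(A) is involutory: sign(A)^2 = I
print(np.allclose(X @ X, np.eye(2)))              # True
print(np.allclose(X, [[1.0, 0.4], [0.0, -1.0]]))  # True for this A
```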

Journal ArticleDOI
TL;DR: In this article, an iterative procedure for computing the eigenvalues and eigenvectors of a class of specially structured Hermitian Toeplitz matrices is proposed.

Proceedings ArticleDOI
TL;DR: The notion of a heterogeneous matrix product is developed and it is shown how it generalizes the common matrix and vector products of linear algebra as well as the generalized forward and backward convolutions of image algebra.
Abstract: The notion of a heterogeneous matrix product is developed. After defining this product, the authors show how it generalizes the common matrix and vector products of linear algebra as well as the generalized forward and backward convolutions of image algebra. Some specific examples are provided.

Book
01 Dec 1991
TL;DR: A textbook covering basic matrix operations; inverses and solutions of linear matrix equations; determinants and their properties; eigenvalues and eigenvectors of a square matrix; linear spaces; linear transformations and their properties; quadratic forms and geometric applications; and methods for solving linear equations and finding eigenvalues.
Abstract: Matrices: basic operations; inverses and solutions of linear matrix equations; determinants and their properties; eigenvalues and eigenvectors of a square matrix; linear spaces; linear transformations and their properties; quadratic forms and geometric applications; solving linear equations and finding eigenvalues.


Journal ArticleDOI
TL;DR: In this paper, matrices with no imaginary eigenvalues are characterized in order to study inertia-preserving matrices; this is used to characterize D-stability of stable matrices, and it is shown that irreducible, acyclic D-stable matrices are inertia preserving.
Abstract: A real matrix A is inertia preserving if in(AD) = in(D) for every invertible diagonal matrix D. This class of matrices is a subset of the D-stable matrices and contains the diagonally stable matrices. In order to study inertia-preserving matrices, matrices that have no imaginary eigenvalues are characterized. This is used to characterize D-stability of stable matrices. It is also shown that irreducible, acyclic D-stable matrices are inertia preserving.
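The defining property is easy to check numerically. As a hedged sketch (using a symmetric positive definite A, which preserves inertia by Sylvester's law of inertia, rather than an example from the paper):

```python
import numpy as np

def inertia(M, tol=1e-10):
    """(n_+, n_-, n_0): eigenvalue counts by sign of the real part."""
    re = np.linalg.eigvals(M).real
    return (int((re > tol).sum()), int((re < -tol).sum()),
            int((np.abs(re) <= tol).sum()))

A = np.diag([1.0, 2.0, 3.0])    # positive definite, hence inertia preserving
D = np.diag([4.0, -1.0, 2.0])   # invertible diagonal with mixed signs
print(inertia(A @ D) == inertia(D))   # True: both are (2, 1, 0)
```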


Journal ArticleDOI
TL;DR: In this paper, a conjecture involving exponentials of Hermitian matrices was proved when one of the matrices has rank at most one and the complete conjecture was shown to hold when the matrix is 2×2.
Abstract: A conjecture involving exponentials of Hermitian matrices is stated, and proved when one of the matrices has rank at most one. As a consequence, the complete conjecture is proved when the matrices are 2×2. A second conjecture involving exponentials of complex symmetric matrices is also stated, and completely proved when the matrices are 2×2. The two conjectures have similar structures but require quite different techniques to analyze.

Journal Article
TL;DR: In this article, the distribution of eigenvalues of 3×3 band matrices when two of the eigenvalues approach each other is found by directly evaluating the δ-function and making use of an expansion of the hypergeometric function in terms of log
Abstract: The distribution of eigenvalues of 3×3 band matrices when two of the eigenvalues approach each other is found by directly evaluating the δ-function and making use of an expansion of the hypergeometric function in terms of log

Journal ArticleDOI
TL;DR: In this article, it was shown that the fundamental time eigenvalue of the linear transport operator increases with the size of the system and that the largest eigen value of a non-negative irreducible matrix is increased whenever any matrix element is increased.
Abstract: It is shown that the fundamental time eigenvalue of the linear transport operator increases with the size of the system. This follows from the increase in the largest eigenvalue of a non-negative irreducible matrix whenever any matrix element is increased. This result of matrix analysis is generalised to more general Krein-Rutman operators that leave a cone of vectors invariant.
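The matrix-analysis fact being invoked is the Perron-Frobenius monotonicity of the spectral radius, which a short numerical sketch (made-up matrix) illustrates:

```python
import numpy as np

def perron_root(M):
    # Largest real part among the eigenvalues; for a nonnegative irreducible
    # matrix this is the Perron eigenvalue (real and simple)
    return max(np.linalg.eigvals(M).real)

M = np.array([[1.0, 2.0],
              [3.0, 0.5]])
M_bigger = M.copy()
M_bigger[0, 1] += 0.1            # increase a single matrix element
print(perron_root(M_bigger) > perron_root(M))   # True: the largest eigenvalue grows
```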

Journal ArticleDOI
TL;DR: In this article, a Hermitian band matrix matrix R is filled in as the Hermitians try to complete it to an invertible matrix F, even though all but one of the maximal submatrices in the band may be not invertable.

Proceedings ArticleDOI
11 Jun 1991
TL;DR: In this article, a unified summary of the formulation of the general feedback theory in terms of the coefficient matrices of the state equations is presented; the feedback matrices are expressed with respect to the coefficient matrix of the state equation for a single-input, single-output feedback network and related to the poles and zeros of the transfer function.
Abstract: Presents a unified summary on the formulation of the general feedback theory in terms of the coefficient matrices of the state equations. The author expresses the feedback matrices with respect to the coefficient matrix of the state equation for a single-input and single-output feedback network, and shows how they are related to the poles and zeros of the transfer function. The author extends these concepts to multiple-input, multiple-output and multiple-loop feedback networks, and derives expressions for the return difference matrix and the null return difference matrix, and relates the zeros and poles of their determinants to the eigenvalues of the coefficient matrices under the nominal condition and under the condition that the elements of interest vanish.

Book ChapterDOI
Kevin Wu1
01 Jan 1991
TL;DR: In this chapter, the fast inversion of the 4 × 4 matrix types that commonly arise in 3D computer graphics is described; a general-purpose inversion procedure is not necessary for special types of matrices, and when speed is more important than code size, a graphics programmer can benefit greatly by identifying groups of matrices that have simple inverses and providing a special matrix-inversion procedure for each type.
Abstract: Publisher Summary This chapter elaborates the fast matrix inversion. Performing matrix operations quickly is especially important when a graphics program changes the state of the transformation pipeline frequently. Matrix inversion can be relatively slow compared with other matrix operations because inversion requires careful checking, and handling to avoid numerical instabilities resulting from round-off error, and to determine when a matrix is singular. A general-purpose matrix inversion procedure is not necessary for special types of matrices. It is found that when speed is more important than code size, a graphics programmer can benefit greatly by identifying groups of matrices that have simple inverses, and providing a special matrix inversion procedure for each type. The inversion of 4 × 4 matrices for types that commonly arise in 3D computer graphics is described. A group is an algebraic object useful for characterizing symmetries and permutations.
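One of the commonest special cases (a standard identity, sketched here in Python rather than the chapter's code) is the rigid-body transform, whose inverse needs no elimination at all: [R, t; 0, 1]^{-1} = [R^T, -R^T t; 0, 1].

```python
import numpy as np

def invert_rigid(M):
    """Closed-form inverse of a 4x4 rigid-body transform (rotation + translation)."""
    R, t = M[:3, :3], M[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = R.T                 # the inverse of a rotation is its transpose
    inv[:3, 3] = -R.T @ t
    return inv

theta = 0.7                           # test transform: rotate about z, then translate
c, s = np.cos(theta), np.sin(theta)
M = np.eye(4)
M[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
M[:3, 3] = [1.0, -2.0, 0.5]
print(np.allclose(invert_rigid(M) @ M, np.eye(4)))   # True
```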

Journal ArticleDOI
TL;DR: A general algorithm for the inversion of block matrices of low displacement rank is proposed, particularly adapted to the solution of the least squares linear prediction problem.
Abstract: A general algorithm for the inversion of block matrices of low displacement rank is proposed. It is particularly adapted to the solution of the least squares linear prediction problem. Some explicit formulas for the inverses of such matrices and a characterization of them are also provided. The method is based on the representation of the shift difference of the considered matrix as the difference between two low-rank positive matrices.
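Displacement rank itself is easy to see numerically. For instance (the standard scalar Toeplitz identity, not the block case treated in the paper), the shift difference T − Z T Z^T with the lower shift Z is nonzero only in the first row and column:

```python
import numpy as np

n = 4
t = np.array([3.0, 2.0, -1.0, 4.0, 1.0, 0.5, 0.25])  # diagonals, i - j from -(n-1) to n-1
i, j = np.indices((n, n))
T = t[i - j + n - 1]                 # Toeplitz: T[i, j] depends only on i - j
Z = np.diag(np.ones(n - 1), -1)      # lower shift matrix
disp = T - Z @ T @ Z.T               # shift T down-right by one step, then subtract
print(np.linalg.matrix_rank(disp))   # 2: the displacement rank of a Toeplitz matrix
```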