
Showing papers on "Matrix analysis published in 1999"


Book
27 May 1999
TL;DR: In this book, the authors present the second edition of a matrix theory text, beginning with an elementary linear algebra review and covering partitioned matrices, matrix polynomials and canonical forms, special types of matrices, and majorization and matrix inequalities.
Abstract: Preface to the Second Edition.- Preface.- Frequently Used Notation and Terminology.- Frequently Used Terms.- 1 Elementary Linear Algebra Review.- 2 Partitioned Matrices, Rank, and Eigenvalues.- 3 Matrix Polynomials and Canonical Forms.- 4 Numerical Ranges, Matrix Norms, and Special Operations.- 5 Special Types of Matrices.- 6 Unitary Matrices and Contractions.- 7 Positive Semidefinite Matrices.- 8 Hermitian Matrices.- 9 Normal Matrices.- 10 Majorization and Matrix Inequalities.- References.- Notation.- Index.

806 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that by using singular value decomposition as a method for calculating the order matrices, principal frames and order parameters can be determined efficiently, even when a very limited set of experimental data is available.
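The SVD-based least-squares computation the TL;DR describes can be sketched generically; the data below are synthetic stand-ins (the paper's actual inputs are experimental couplings), so treat this as an assumed illustration rather than the paper's method:

```python
import numpy as np

# Hypothetical stand-in for the overdetermined linear system relating the
# measurements to the five independent order-matrix elements.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))   # 8 measurements, 5 unknowns
x_true = rng.standard_normal(5)
b = A @ x_true

# SVD-based least squares: pseudoinverse via the factorization A = U S V^T.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

print(np.allclose(x, x_true))  # → True
```

The SVD route is preferred over forming normal equations because it stays stable even when the measurement set is small or nearly rank-deficient, which is the regime the abstract emphasizes.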

550 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the class of matrices of input-output operators for discrete time-dependent descriptor linear systems; the algebra of such operators is analyzed, and multiplication and inversion algorithms of linear complexity are presented.
Abstract: In this paper we continue the study of structured matrices which admit a linear complexity inversion algorithm. The new class which is studied here appears naturally as the class of matrices of input-output operators for discrete time-dependent descriptor linear systems. The algebra of such operators is analyzed. Multiplication and inversion algorithms of linear complexity are presented and their implementation is illustrated.

174 citations


Journal ArticleDOI
TL;DR: In this paper, the moments of the characteristic determinants of random matrices are computed as limits, at coinciding points, of multi-point correlators of determinants, which are in fact universal in Dyson's scaling limit in which the difference between the points goes to zero, the size of the matrix goes to infinity, and their product remains finite.
Abstract: Number theorists have studied extensively the connections between the distribution of zeros of the Riemann $\zeta$-function, and of some generalizations, with the statistics of the eigenvalues of large random matrices. It is interesting to compare the average moments of these functions in an interval to their counterpart in random matrices, which are the expectation values of the characteristic polynomials of the matrix. It turns out that these expectation values are quite interesting. For instance, the moments of order 2K scale, for unitary invariant ensembles, as the density of eigenvalues raised to the power $K^2$ ; the prefactor turns out to be a universal number, i.e. it is independent of the specific probability distribution. An equivalent behaviour and prefactor had been found, as a conjecture, within number theory. The moments of the characteristic determinants of random matrices are computed here as limits, at coinciding points, of multi-point correlators of determinants. These correlators are in fact universal in Dyson's scaling limit in which the difference between the points goes to zero, the size of the matrix goes to infinity, and their product remains finite.

141 citations



Journal ArticleDOI
TL;DR: In this paper, the authors determined the eigenvalues of sequences of tridiagonal matrices that contain a Toeplitz matrix in the upper left block.
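As background for this entry, the purely Toeplitz tridiagonal case (the paper treats matrices that only contain such a block) has a classical closed-form spectrum, which the following check confirms numerically; the formula is the standard one, not a result of the paper:

```python
import numpy as np

# Classical fact: the n x n tridiagonal Toeplitz matrix with diagonal a and
# off-diagonals b, c has eigenvalues
#   a + 2*sqrt(b*c)*cos(k*pi/(n+1)),  k = 1..n.
n, a, b, c = 6, 2.0, 1.0, 1.0
T = a * np.eye(n) + b * np.eye(n, k=-1) + c * np.eye(n, k=1)

computed = np.sort(np.linalg.eigvalsh(T))
formula = np.sort(a + 2 * np.sqrt(b * c) * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.allclose(computed, formula))  # → True
```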

107 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied a class of structured matrices which admit a linear complexity inversion algorithm; this class contains, in particular, diagonal plus semiseparable matrices of order one and tridiagonal matrices.
Abstract: In this paper we continue the study of a class of structured matrices which admit a linear complexity inversion algorithm. This class contains in particular diagonal plus semiseparable matrices of order one and tridiagonal matrices. For this class explicit inversion formulas are obtained and linear complexity inversion algorithms are developed. The implementation of algorithms is illustrated by numerical experiments. The new class which is studied here appears naturally as the class of matrices of input output operators for descriptor linear systems.
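As a concrete instance of linear-complexity work with one member of this class, a tridiagonal system can be solved in O(n) by the classical Thomas algorithm; this is a textbook sketch, not the inversion algorithm of the paper:

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm: O(n) solution of a tridiagonal system.

    sub: sub-diagonal (length n-1), diag: diagonal (length n),
    sup: super-diagonal (length n-1), rhs: right-hand side (length n).
    Assumes no pivoting is needed (e.g. diagonally dominant systems)."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):            # forward elimination
        m = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n                    # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] @ [1,2,3] = [4,8,8]
print([round(v, 6) for v in solve_tridiagonal([1.0, 1.0], [2.0, 2.0, 2.0],
                                              [1.0, 1.0], [4.0, 8.0, 8.0])])
# → [1.0, 2.0, 3.0]
```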

32 citations


Book
01 Sep 1999
TL;DR: In this book, the authors present an introductory linear algebra text covering systems of linear equations, matrix multiplication and linear transformations, determinants, subspaces and their properties, eigenvalues and diagonalization, orthogonality, and vector spaces.
Abstract: 1 Matrices, Vectors, and Systems of Linear Equations Matrices and Vectors Linear Combinations, Matrix-Vector Products, and Special Matrices Systems of Linear Equations Gaussian Elimination Applications of Systems of Linear Equations The Span of a Set Vectors Linear Dependence and Independence Chapter 1 Review 2 Matrices and Linear Transformations Matrix Multiplication Applications of Matrix Multiplication Invertibility and Elementary Matrices The Inverse of a Matrix The LU Decomposition of a Matrix Linear Transformations and Matrices Composition and Invertibility of Linear Transformations Chapter 2 Review 3 Determinants Cofactor Expansion Properties of Determinants Chapter 3 Review 4 Subspaces and Their Properties Subspaces Basis and Dimension The Dimension of Subspaces Associated with a Matrix Coordinate Systems Matrix Representations of Linear Operators Chapter 4 Review 5 Eigenvalues, Eigenvectors, and Diagonalization Eigenvalues and Eigenvectors The Characteristic Polynomial Diagonalization of Matrices Diagonalization of Linear Operators Applications of Eigenvalues Chapter 5 Review 6 Orthogonality The Geometry of Vectors Orthonormal Vectors Least-Squares Approximation and Orthogonal Projection Matrices Orthogonal Matrices and Operators Symmetric Matrices Singular Value Decomposition Rotations of R3 and Computer Graphics Chapter 6 Review 7 Vector Spaces Vector Spaces and their Subspaces Dimension and Isomorphism Linear Tranformations and Matrix Representations Inner Product Spaces Chapter 7 Review Appendix: Complex Numbers

31 citations


Journal ArticleDOI
TL;DR: The problem of determining the relation between the matrix analytic (height) and graph theoretic (level) spectral properties of a matrix has been studied for about seventy years, as discussed by the authors, and has been extended to general matrices over an arbitrary field.

22 citations


Proceedings ArticleDOI
02 Jun 1999
TL;DR: In this article, a sufficient condition in terms of quadratic stability of matrices is presented for the class of regular descriptor linear systems which possess no infinite relative eigenvalues, and the problem of stabilization of descriptor linear systems via proportional-plus-derivative (PD) state feedback control is converted into two related quadratic stabilization problems, which can be easily solved using linear matrix inequality (LMI) techniques.
Abstract: A sufficient condition, in terms of quadratic stability of matrices, for the class of regular descriptor linear systems which possess no infinite relative eigenvalues is presented. Utilizing this result, the problem of stabilization of descriptor linear systems via proportional-plus-derivative (PD) state feedback control is converted into two related quadratic stabilization problems, which can be easily solved using linear matrix inequality (LMI) techniques. Based on the same idea, robust stabilization via PD state feedback for uncertain descriptor linear systems, with coefficient matrices belonging to some compact sets in proper matrix spaces, is also dealt with.

22 citations


Proceedings ArticleDOI
01 Jan 1999
TL;DR: In this paper, a linear matrix inequality based formulation of a discrete-time predictive filter that computes a minimal ellipsoid of confidence for the state of a linear system subject to structured and time-varying uncertainties in all the system matrices is presented.
Abstract: We provide a linear matrix inequality based formulation of a discrete-time predictive filter that computes a minimal ellipsoid of confidence for the state of a linear system subject to structured and time-varying uncertainties in all the system matrices.

Journal ArticleDOI
TL;DR: In this paper, a unified framework for the equations of Sylvester, Lyapunov and Riccati is introduced, which enables one to extend the convex invertible cone structure to all three equations and explore related properties.

Journal ArticleDOI
TL;DR: In this paper, a new class of nonnegative matrices, called the path product (PP) matrices, is introduced; it is shown that every inverse M-matrix is PP, and the completion problem is solved for partial PP matrices.
Abstract: A new class of nonnegative matrices, called the path product (PP) matrices, is introduced. Every inverse M-matrix is PP and this fact gives transparent proofs of a number of facts about inverse M-matrices that hold in a broader setting. For n≤ 3 (and not greater) the (strict) PP matrices are exactly the inverse M-matrices. Finally, the PP matrices are studied, a number of properties given, and the completion problem is solved for partial PP matrices. Unlike other properties inherited by principal submatrices, there is no graphtheoretic restriction on completability of partial PP matrices.
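A brute-force check of a path product condition of the form a_ik · a_kj ≤ a_kk · a_ij (the exact indexing here is an assumption; see the paper for the precise definition) illustrates the claim that inverse M-matrices are PP:

```python
import numpy as np

def is_path_product(A, tol=1e-12):
    # Check a[i,k] * a[k,j] <= a[k,k] * a[i,j] for all i, j, k.
    # Assumed form of the PP condition; the paper's definition may differ
    # in details such as strictness on the off-diagonal entries.
    n = A.shape[0]
    return all(A[i, k] * A[k, j] <= A[k, k] * A[i, j] + tol
               for i in range(n) for j in range(n) for k in range(n))

# A symmetric M-matrix: positive diagonal, nonpositive off-diagonal,
# and a nonnegative inverse.
M = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
print(is_path_product(np.linalg.inv(M)))  # → True
```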

Proceedings ArticleDOI
17 Oct 1999
TL;DR: Algorithms to multiply two vectors, a vector and a matrix, and two matrices on an OTIS-Mesh optoelectronic computer are developed and the relative merits of each compared.
Abstract: We develop algorithms to multiply two vectors, a vector and a matrix, and two matrices on an OTIS-Mesh optoelectronic computer. Two mappings of a matrix onto an OTIS-Mesh, group row and group sub-mesh, are considered, and the relative merits of each compared.

Journal ArticleDOI
TL;DR: In this article, the convergence domain of the sequence of resolvents can be described in terms of matrices involved in the representation, and conditions for the convergence of Chebyshev continued fractions on sets in the complex domain are established.
Abstract: The approximability of the resolvent of an operator induced by a band matrix by the resolvents of its finite-dimensional sections is studied. For bounded perturbations of self-adjoint matrices a positive result is obtained. The convergence domain of the sequence of resolvents can be described in this case in terms of matrices involved in the representation. This result is applied to tridiagonal complex matrices to establish conditions for the convergence of Chebyshev continued fractions on sets in the complex domain. In the particular case of compact perturbations this result is improved and a connection between the poles of the limit function and the eigenvalues of the tridiagonal matrix is established.

Posted Content
TL;DR: The causative-matrix method to analyze temporal change assumes that a matrix transforms one Markovian transition matrix into another by a left multiplication of the first matrix; the method is demand-driven when applied to input-output economics.
Abstract: The causative-matrix method to analyze temporal change assumes that a matrix transforms one Markovian transition matrix into another by a left multiplication of the first matrix; the method is demand-driven when applied to input-output economics. An extension is presented without assuming the demand-driven or supply-driven hypothesis. Starting from two flow matrices X and Y, two diagonal matrices are searched for, one premultiplying and the second postmultiplying X, to obtain a result as close as possible to Y by least squares. The paper proves that the method is deceptive because the diagonal matrices are unidentified and the interpretation of results is unclear.
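The identification problem the abstract points to can be sketched in a few lines: if diagonal matrices D1 (premultiplying) and D2 (postmultiplying) reproduce Y, then so do scaled versions of them, so the pair cannot be pinned down from X and Y alone (the matrices here are toy data, not economic flows):

```python
import numpy as np

# If Y = D1 @ X @ D2 with D1, D2 diagonal, then for any c != 0 the pair
# (c*D1, D2/c) produces exactly the same Y, so (D1, D2) is unidentified.
X = np.array([[2.0, 1.0], [1.0, 3.0]])
D1 = np.diag([1.5, 0.5])
D2 = np.diag([2.0, 4.0])
Y1 = D1 @ X @ D2

c = 7.0
Y2 = (c * D1) @ X @ (D2 / c)
print(np.allclose(Y1, Y2))  # → True: the two factorizations are indistinguishable
```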

Journal ArticleDOI
TL;DR: A generalization of the successive overrelaxation (SOR) method is introduced and the SOR theory on determination of the optimal parameter is extended to the generalized method to include a wide class of matrices.
Abstract: Stair matrices and their generalizations are introduced. Some properties of the matrices are presented. Like triangular matrices this class of matrices provides bases of matrix splittings for iterative methods. A remarkable feature of iterative methods based on the new class of matrices is that the methods are easily implemented for parallel computation. In particular, a generalization of the successive overrelaxation (SOR) method is introduced. The SOR theory on determination of the optimal parameter is extended to the generalized method to include a wide class of matrices. The asymptotic rate of convergence of the new method is derived for Hermitian positive definite matrices using bounds of the eigenvalues of Jacobi matrices and numerical radius. Finally, numerical tests are presented to corroborate the obtained results.
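For reference, the plain SOR iteration that the stair-matrix splittings generalize can be sketched as follows; this is the textbook version, not the paper's generalized method:

```python
import numpy as np

def sor(A, b, omega=1.25, sweeps=200):
    """Plain successive overrelaxation for A x = b (textbook baseline)."""
    x = np.zeros(len(b))
    for _ in range(sweeps):
        for i in range(len(b)):
            # sum over all j != i, using already-updated entries for j < i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# Hermitian positive definite test matrix
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
print(np.allclose(sor(A, b), np.linalg.solve(A, b)))  # → True
```

The in-place update makes each sweep sequential, which is precisely the bottleneck the paper's stair-matrix splittings are designed to remove for parallel computation.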

Journal ArticleDOI
TL;DR: In this article, it was shown that any square matrix over a field is the product of at most three triangular matrices, and explicit LUL factorizations for all 2×2 and 3×3 matrices over fields were given.
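A hand-worked 2×2 instance of the result: the antidiagonal permutation has no LU factorization (its leading 1×1 minor is 0), yet it is a product of three triangular factors; the particular factors below are one choice among many, found by hand:

```python
import numpy as np

# lower * upper * lower factorization of [[0,1],[1,0]]
L1 = np.array([[1.0, 0.0], [1.0, 1.0]])
U  = np.array([[1.0, 1.0], [0.0, -1.0]])
L2 = np.array([[1.0, 0.0], [-1.0, 1.0]])
P = L1 @ U @ L2
print(np.array_equal(P, np.array([[0.0, 1.0], [1.0, 0.0]])))  # → True
```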

Proceedings ArticleDOI
07 Dec 1999
TL;DR: In this paper, the problem of output regulation for linear systems subject to input saturation was studied, and it was shown that the use of nonlinear controllers sensibly enlarges, with respect to what can be obtained using a linear dynamic feedback, the set of initial conditions (of the plant and the exosystem) for which the problem is solvable.
Abstract: This paper studies the problem of output regulation for linear systems subject to input saturation. It is assumed that the dynamic matrix has anti-stable eigenvalues, that the disturbances affecting the plant are constant, sinusoidal, or generically periodic signals with zero mean, and that only the output of the plant is available for feedback. It is shown that the use of nonlinear controllers sensibly enlarges, with respect to what can be obtained using a linear dynamic feedback, the set of initial conditions (of the plant and the exosystem) for which the problem is solvable. An explicit expression of this set is also given.

Journal ArticleDOI
TL;DR: In this article, the authors build on Druzkowski's reduction of the Jacobian conjecture to cubic linear mappings and develop an algorithm that translates the constant-Jacobian condition into algebraic equations in the matrix of parameters.

Journal ArticleDOI
TL;DR: Theoretical and algorithmic results for the numerical computation of real logarithms of nearby matrices are given in this article, where interpolation for sequences of invertible matrices is considered.

Proceedings Article
01 Jun 1999
TL;DR: This work presents and studies more sophisticated data allocation strategies that balance the load on heterogeneous 2D-grids with respect to the performance of the processors.
Abstract: We study the implementation of dense linear algebra computations, such as matrix multiplication and linear system solvers, on two-dimensional (2D) grids of heterogeneous processors. For these operations, 2D-grids are the key to scalability and efficiency. The uniform block-cyclic data distribution scheme commonly used for homogeneous collections of processors limits the performance of these operations on heterogeneous grids to the speed of the slowest processor. We present and study more sophisticated data allocation strategies that balance the load on heterogeneous 2D-grids with respect to the performance of the processors. The practical usefulness of these strategies is fully demonstrated by experimental data for a heterogeneous network of workstations.
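The load-balancing idea can be illustrated with a toy speed-proportional allocation; this is an assumed simplification for intuition only, as the paper's strategies are considerably more sophisticated:

```python
# Each processor receives a share of the n columns proportional to its
# relative speed, so no processor sits idle waiting for the slowest one,
# unlike a uniform block-cyclic distribution.
def allocate_columns(n, speeds):
    total = sum(speeds)
    shares = [n * s // total for s in speeds]
    # distribute leftover columns, fastest processors first
    leftover = n - sum(shares)
    for i in sorted(range(len(speeds)), key=lambda k: -speeds[k])[:leftover]:
        shares[i] += 1
    return shares

print(allocate_columns(10, [1, 2, 2]))  # → [2, 4, 4]
```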

Journal Article
TL;DR: This work characterized the linear operators that preserve zero-term rank of the m×n matrices over binary Boolean algebra.
Abstract: Zero-term rank of a matrix is the minimum number of lines (rows or columns) needed to cover all the zero entries of the given matrix. We characterized the linear operators that preserve zero-term rank of the m×n matrices over the binary Boolean algebra.
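The zero-term rank defined in the abstract can be computed by brute force for small matrices; this sketch illustrates the definition only and is unrelated to the paper's operator characterization:

```python
from itertools import combinations

def zero_term_rank(M):
    """Minimum number of lines (rows or columns) covering all zero entries,
    by brute force over line subsets -- fine for small matrices."""
    m, n = len(M), len(M[0])
    zeros = [(i, j) for i in range(m) for j in range(n) if M[i][j] == 0]
    lines = [('r', i) for i in range(m)] + [('c', j) for j in range(n)]
    for k in range(len(lines) + 1):
        for chosen in combinations(lines, k):
            rows = {i for t, i in chosen if t == 'r'}
            cols = {j for t, j in chosen if t == 'c'}
            if all(i in rows or j in cols for i, j in zeros):
                return k  # smallest k always found; all lines cover everything

print(zero_term_rank([[0, 1], [1, 0]]))  # → 2 (each zero needs its own line)
```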

Journal ArticleDOI
TL;DR: In this paper, the authors considered the case of large matrices and proposed a method to partition the matrix into two blocks: a small block in which the stability is studied and a large block whose field of values is located in the complex plane.

Journal ArticleDOI
TL;DR: The possible dimensions of spaces of matrices over GF(2) whose nonzero elements all have rank 2 are investigated in this article.
Abstract: The possible dimensions of spaces of matrices over GF(2) whose nonzero elements all have rank 2 are investigated.

01 Jan 1999
TL;DR: In this paper, it was shown that Bell's inequalities are not sufficient for matrices of order n ≥ 5, and necessary and sufficient conditions were given for a correlation matrix of order n ≥ 2 to be the correlation matrix of spin variables in the classical sense.
Abstract: SUMMARY. Necessary and sufficient conditions are given for a correlation matrix of order n ≥ 2 to be the correlation matrix of spin variables in the classical sense. It is shown that Bell's inequalities (1964) are not sufficient for matrices of order n ≥ 5.

Journal ArticleDOI
TL;DR: In this article, connection and transition matrices in the Conley index theory for flows are introduced, and basic definitions and simple examples are discussed.
Abstract: This paper is an introduction to connection and transition matrices in the Conley index theory for flows. Basic definitions and simple examples are discussed.

Journal ArticleDOI
TL;DR: The classification of AR matrices is discussed, their normal forms are defined, their simplest canonical forms are found, and all (K + 1) × K AR matrices, which are the most interesting matrices in the applications, are characterized.

Proceedings ArticleDOI
01 Jun 1999
TL;DR: It follows that using matrix multiplications alone is not sufficient for obtaining a “fully” efficient (O(n) work) parallel solution, even for evaluating LACs, so a CREW PRAM algorithm is presented which, during execution, propagates computed values into future matrix products and uses a special scheduling of the matrix multiplications to reduce constant factors of the execution time.
Abstract: We define a new restricted type of algebraic circuit (AC) problems, called linear algebraic circuits (LACs), and consider the problem of obtaining efficient solutions for their parallel evaluation. The term "efficiency" indicates that for some fixed number of processors, the algorithm can compete with the sequential evaluation of ACs. While parallel evaluation of general ACs is P-complete, there are restricted types of ACs, such as prefix-sums circuits (ACs in the form of chains) and arithmetic expressions (ACs in the form of trees), for which efficient parallel evaluation algorithms exist. In this work, we consider a restricted type of ACs, other than chains or trees, where at least one input of each multiplication must be a constant. We show that LACs can be evaluated in (n/18p^{1/3}) log^2 p steps, where n is the size of the circuit and p < n is the number of processors. Thus, for suitable values of p, this algorithm can compete with the sequential evaluation of LACs and achieve speedups of about p^{1/3}. It follows that LACs can be represented by a product of matrices. Thus, computing this product in parallel yields a possible evaluation algorithm for LACs. Parallel computation of a product of matrices is also the main "engine" used in existing parallel evaluation algorithms for ACs. We show, via a lower bound, that such a naive solution based on matrix multiplication alone is inherently inefficient regardless of the order in which the product of the matrices is computed, and hence a more complex solution must be devised. Finally, useful applications for parallelizing sequential code are given.
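The "product of matrices" representation mentioned in the abstract can be seen on the simplest LAC, a linear recurrence in which every multiplication has a constant input (a minimal sketch, not the paper's construction):

```python
import numpy as np

# The recurrence x_{i+1} = a_i * x_i + b_i is the top entry of a product of
# 2x2 matrices, so evaluating the circuit reduces to a matrix product that
# any associative parallel-reduction scheme can compute.
a = [2.0, 3.0, 0.5]
b = [1.0, -1.0, 4.0]
x0 = 1.0

M = np.eye(2)
for ai, bi in zip(a, b):
    M = np.array([[ai, bi], [0.0, 1.0]]) @ M   # left-multiply the new step

x_matrix = (M @ np.array([x0, 1.0]))[0]

# direct sequential evaluation for comparison
x = x0
for ai, bi in zip(a, b):
    x = ai * x + bi

print(x_matrix == x)  # → True (both equal 8.0)
```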

Journal ArticleDOI
TL;DR: This work introduces the notion of block recursive matrix, and shows that an important property of scalar recursive matrices, namely, the product rule, also holds in the case of block matrices.