Journal ArticleDOI

The sign matrix and the separation of matrix eigenvalues

01 Feb 1983-Linear Algebra and its Applications (North-Holland)-Vol. 49, pp 221-232
TL;DR: In this paper, the sign matrices uniquely associated with the matrices (M − ζ_j I)^2, where the ζ_j are the corners of a rectangle oriented at π/4 to the axes of a Cartesian coordinate system, are used to compute the number of eigenvalues of an arbitrarily chosen matrix M which lie within the rectangle, and to determine the left and right invariant subspaces of M associated with these eigenvalues.
About: This article is published in Linear Algebra and its Applications. The article was published on 1983-02-01 and is currently open access. It has received 44 citations to date. The article focuses on the topics: Square root of a 2 by 2 matrix & Integer matrix.
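The device described in the TL;DR can be illustrated numerically. The sketch below (not the paper's algorithm) shows the single-shift building block: the eigenvalues of sign((M − ζI)^2) are +1 for eigenvalues λ of M with Re((λ − ζ)^2) > 0, i.e. those lying in the double wedge around the horizontal line through ζ bounded by lines at π/4 to the axes, and −1 otherwise, so the trace gives a signed count. The random test matrix, the real shift, and SciPy's signm are illustrative choices; the paper combines four such sign matrices taken at the corners ζ_j of the rotated rectangle.

```python
# Minimal sketch (not the paper's algorithm): the trace of sign((M - zeta*I)^2)
# gives a signed count of eigenvalues of M in the double wedge around the
# horizontal line through zeta, bounded by lines at pi/4 to the axes.
import numpy as np
from scipy.linalg import signm

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))          # arbitrary real test matrix
zeta = 0.3                               # a real shift for simplicity

shifted = M - zeta * np.eye(n)
S = signm(shifted @ shifted)             # sign matrix of the shifted, squared matrix

eigs = np.linalg.eigvals(M)
in_horizontal_wedge = np.real((eigs - zeta) ** 2) > 0
signed_count = int(np.sum(in_horizontal_wedge)) - int(np.sum(~in_horizontal_wedge))

print("trace of sign matrix:", int(round(np.trace(S).real)))
print("signed wedge count  :", signed_count)
```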
Citations
Journal ArticleDOI
TL;DR: This work discusses basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms, and presents direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition.
Abstract: We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, the singular value decomposition, and generalizations of these to two matrices. We consider dense, band and sparse matrices.

217 citations


Cites background or methods from "The sign matrix and the separation ..."

  • ...A globally, asymptotically quadratically convergent iteration to compute the sign function of B is B_{i+1} = (B_i + B_i^{-1})/2 [108, 168, 190]; this is simply Newton's method applied to B^2 = I, and can also be seen to be equivalent to repeated squaring (the power method) of the Cayley transform of B....


  • ...One such function f is the sign function [13, 108, 120, 135, 168, 190], which maps points with positive real part to +1 and those with negative real part to −1; adding 1 to this function then maps eigenvalues in the right half plane to 2 and those in the left half plane to 0, as desired....


  • ...By working on a shifted and squared real matrix, one can divide along lines at an angle of π/4 and retain real arithmetic [13, 108, 190]....

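For concreteness, the iteration quoted in the first excerpt can be sketched in a few lines; this is a minimal illustration assuming the matrix has no eigenvalues on the imaginary axis, with an ad hoc stopping test.

```python
# Minimal sketch of the quoted Newton iteration B_{i+1} = (B_i + B_i^{-1})/2
# for the matrix sign function; assumes B has no eigenvalues on the imaginary
# axis. The tolerance and iteration cap are illustrative choices.
import numpy as np

def sign_newton(B, tol=1e-12, max_iter=100):
    S = np.array(B, dtype=float)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S, 1) <= tol * np.linalg.norm(S_next, 1):
            return S_next
        S = S_next
    return S

# Check: trace(sign(B)) = (#eigenvalues with Re > 0) - (#eigenvalues with Re < 0),
# and (sign(B) + I)/2 projects onto the right-half-plane invariant subspace.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
S = sign_newton(B)
eigs = np.linalg.eigvals(B)
print(int(round(np.trace(S))), int(np.sum(eigs.real > 0) - np.sum(eigs.real < 0)))
print("S @ S close to I:", np.allclose(S @ S, np.eye(6), atol=1e-8))
```

In practice the iterates are usually scaled, e.g. by |det B_i|^{-1/n}, to accelerate the initial phase; see the sign-function survey among the citing articles below.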

Journal ArticleDOI
TL;DR: In this article, the matrix-sign-function algorithm for algebraic Riccati equations is improved by a simple reorganization that changes nonsymmetric matrix inversions into symmetric matrix inversions.

212 citations
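For background, the classical sign-function approach to the continuous-time algebraic Riccati equation A'X + XA − XGX + Q = 0 (due to Roberts, listed among the references below) computes the sign of the associated Hamiltonian matrix and recovers the stabilizing solution from its stable invariant subspace. The sketch below illustrates that basic approach on made-up test data; it is not the symmetrized reorganization described in the cited paper.

```python
# Minimal sketch of the classical sign-function solver for the CARE
#   A'X + X A - X G X + Q = 0,
# via the sign of the Hamiltonian matrix. Test data are made up.
import numpy as np
from scipy.linalg import signm, solve_continuous_are

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
G = C @ C.T + np.eye(n)                      # G = B R^{-1} B', SPD here
Q = np.eye(n)

H = np.block([[A, -G], [-Q, -A.T]])          # Hamiltonian matrix
Z = signm(H).real                            # sign of a real matrix; drop rounding noise
Z11, Z12 = Z[:n, :n], Z[:n, n:]
Z21, Z22 = Z[n:, :n], Z[n:, n:]

# The stable invariant subspace is spanned by [I; X], and (Z + I)[I; X] = 0,
# which gives an overdetermined linear system for X.
lhs = np.vstack([Z12, Z22 + np.eye(n)])
rhs = -np.vstack([Z11 + np.eye(n), Z21])
X, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

print("Riccati residual norm:", np.linalg.norm(A.T @ X + X @ A - X @ G @ X + Q))
print("matches solve_continuous_are:",
      np.allclose(X, solve_continuous_are(A, np.eye(n), Q, np.linalg.inv(G)), atol=1e-6))
```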

Journal ArticleDOI
TL;DR: A survey of the matrix sign function is presented including some historical background, definitions and properties, approximation theory and computational methods, and condition theory and estimation procedures.
Abstract: A survey of the matrix sign function is presented, including some historical background, definitions and properties, approximation theory and computational methods, and condition theory and estimation procedures. Applications to areas such as control theory, eigendecompositions, and roots of matrices are outlined, and some new theoretical results are also given.

210 citations

Book ChapterDOI
01 Jan 1991
TL;DR: An overview is given of progress over the past ten to fifteen years towards reliable and efficient numerical solution of various types of Riccati equations.
Abstract: In this tutorial paper, an overview is given of progress over the past ten to fifteen years towards reliable and efficient numerical solution of various types of Riccati equations. Our attention will be directed primarily to matrix-valued algebraic Riccati equations and numerical methods for their solution based on computing bases for invariant subspaces of certain associated matrices. Riccati equations arise in modeling both continuous-time and discrete-time systems in a wide variety of applications in science and engineering. One can study both algebraic equations and differential or difference equations. Both algebraic and differential or difference equations can be further classified according to whether their coefficient matrices give rise to so-called symmetric or nonsymmetric equations. Symmetric Riccati equations can be further classified according to whether or not they are definite or indefinite.

145 citations

Journal ArticleDOI
TL;DR: An inverse-free, highly parallel spectral divide-and-conquer algorithm is presented that can compute either an invariant subspace of a nonsymmetric matrix or a pair of left and right deflating subspaces of a regular matrix pencil.
Abstract: We discuss an inverse-free, highly parallel, spectral divide and conquer algorithm. It can compute either an invariant subspace of a nonsymmetric matrix \(A\), or a pair of left and right deflating subspaces of a regular matrix pencil \(A - \lambda B\). This algorithm is based on earlier ones of Bulgakov, Godunov and Malyshev, but improves on them in several ways. This algorithm only uses easily parallelizable linear algebra building blocks: matrix multiplication and QR decomposition, but not matrix inversion. Similar parallel algorithms for the nonsymmetric eigenproblem use the matrix sign function, which requires matrix inversion and is faster but can be less stable than the new algorithm.

145 citations
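A simplified sketch of the underlying spectral divide-and-conquer step, using the matrix sign function in place of the paper's inverse-free iteration: form the spectral projector (I + sign(A))/2, extract an orthonormal basis of its range with a pivoted QR factorization, and apply the resulting orthogonal similarity, which block-triangularizes A with the right-half-plane eigenvalues in the leading block.

```python
# Simplified sketch of one spectral divide-and-conquer step via the matrix sign
# function (the cited paper replaces this with an inverse-free iteration).
import numpy as np
from scipy.linalg import signm, qr

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))

S = signm(A).real                        # matrix sign of A (real for real A)
P = 0.5 * (S + np.eye(n))                # spectral projector onto the right-half-plane subspace
k = int(round(np.trace(P)))              # dimension of that invariant subspace
Q, _, _ = qr(P, pivoting=True)           # pivoted QR: leading k columns of Q span range(P)

T = Q.T @ A @ Q                          # block upper triangular up to rounding
print("norm of the (2,1) block:", np.linalg.norm(T[k:, :k]))
print("leading block eigenvalues in right half plane:",
      bool(np.all(np.linalg.eigvals(T[:k, :k]).real > 0)))
```

The cited algorithm achieves the same splitting using only matrix multiplication and QR decomposition, avoiding the matrix inversions hidden inside the sign iteration.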

References
Book
01 Jan 1965
Abstract: Theoretical background; Perturbation theory; Error analysis; Solution of linear algebraic equations; Hermitian matrices; Reduction of a general matrix to condensed form; Eigenvalues of matrices of condensed forms; The LR and QR algorithms; Iterative methods; Bibliography; Index.

7,422 citations

Book
01 Jan 1979

2,694 citations

Journal ArticleDOI
TL;DR: The sign function of a square matrix can be defined in terms of a contour integral or as the result of an iterated map as discussed by the authors, which enables a matrix to be decomposed into two components whose spectra lie on opposite sides of the imaginary axis.
Abstract: The sign function of a square matrix can be defined in terms of a contour integral or as the result of the iterated map \(Z_{r+1} = \tfrac{1}{2}(Z_r + Z_r^{-1})\). Application of this function enables a matrix to be decomposed into two components whose spectra lie on opposite sides of the imaginary axis. This has application in the reduction of linear systems to lower-order models and in the solution of the matrix Lyapunov and algebraic Riccati equations.

430 citations
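The Lyapunov application mentioned here has a particularly compact form: if A is stable and AX + XA' + Q = 0, then sign([[A, Q], [0, −A']]) = [[−I, 2X], [0, I]], so X can be read off one block of a sign matrix. A minimal sketch with made-up test data:

```python
# Minimal sketch: the Lyapunov equation A X + X A' + Q = 0 (A stable) solved by
# reading off a block of sign([[A, Q], [0, -A']]) = [[-I, 2X], [0, I]].
import numpy as np
from scipy.linalg import signm, solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to make A stable
Q = np.eye(n)

H = np.block([[A, Q], [np.zeros((n, n)), -A.T]])
Z = signm(H).real
X = 0.5 * Z[:n, n:]

print("Lyapunov residual norm:", np.linalg.norm(A @ X + X @ A.T + Q))
print("matches solve_continuous_lyapunov:",
      np.allclose(X, solve_continuous_lyapunov(A, -Q), atol=1e-7))
```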

Journal ArticleDOI
TL;DR: New algorithms, based on the matrix sign function, for the solution of algebraic matrix Riccati equations, Lyapunov equations, coupled Riccati equations, spectral factorization, matrix square roots, pole assignment, and the algebraic eigenvalue-eigenvector problem are presented.

267 citations
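One of the listed applications, matrix square roots, rests on a standard block identity: if A has no eigenvalues on the closed negative real axis, then sign([[0, A], [I, 0]]) = [[0, A^{1/2}], [A^{-1/2}, 0]]. A minimal sketch with an arbitrary symmetric positive definite test matrix:

```python
# Minimal sketch: principal matrix square root via the sign of [[0, A], [I, 0]].
# Assumes A has no eigenvalues on the closed negative real axis.
import numpy as np
from scipy.linalg import signm, sqrtm

rng = np.random.default_rng(5)
n = 4
C = rng.standard_normal((n, n))
A = C @ C.T + np.eye(n)                     # made-up SPD test matrix

N = np.block([[np.zeros((n, n)), A], [np.eye(n), np.zeros((n, n))]])
S = signm(N).real                           # equals [[0, sqrt(A)], [inv(sqrt(A)), 0]]
sqrtA = S[:n, n:]

print(np.allclose(sqrtA @ sqrtA, A, atol=1e-7))
print(np.allclose(sqrtA, sqrtm(A), atol=1e-7))
```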

Journal ArticleDOI
TL;DR: In this paper, results on locating the missing zeros of a polynomial, given circular regions containing the remaining zeros and values of derivatives of the logarithmic derivative, are used to construct a version of Newton's method with error bounds and a cubically convergent algorithm for the simultaneous approximation of all zeros of a polynomial.
Abstract: Suppose all zeros of a polynomial p but one are known to lie in specified circular regions, and the value of the logarithmic derivative \(p'p^{-1}\) is known at a point. What can be said about the location of the remaining zero? This question is answered in the present paper, as well as its generalization where several zeros are missing and the values of some derivatives of the logarithmic derivative are known. A connection with a classical result due to Laguerre is established, and an application to the problem of locating zeros of certain transcendental functions is given. The results are used to construct (i) a version of Newton's method with error bounds, (ii) a cubically convergent algorithm for the simultaneous approximation of all zeros of a polynomial. The algorithms and their theoretical foundation make use of circular arithmetic, an extension, based on the theory of Moebius transformations, of interval arithmetic from the real line to the extended complex plane.

229 citations
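For illustration of a cubically convergent simultaneous iteration in this spirit, the sketch below implements the Ehrlich–Aberth method, which also attains cubic convergence for simple zeros; it works in ordinary complex arithmetic and is not the circular-arithmetic algorithm of the cited paper.

```python
# Minimal sketch: Ehrlich-Aberth simultaneous iteration for all zeros of a
# polynomial (cubically convergent for simple zeros). Illustrative only; not
# the circular-arithmetic algorithm of the cited paper.
import numpy as np

def aberth(coeffs, iters=50):
    """coeffs: polynomial coefficients, highest degree first (numpy convention)."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = p.order
    # crude initial guesses spread on a circle enclosing the zeros
    z = 4.0 * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(iters):
        w = p(z) / dp(z)                                   # Newton corrections
        sums = np.array([np.sum(1.0 / (z[i] - np.delete(z, i))) for i in range(n)])
        z = z - w / (1.0 - w * sums)                       # Aberth correction
    return z

roots = np.sort_complex(aberth([1, -6, 11, -6]))           # p(x) = (x-1)(x-2)(x-3)
print(roots)                                               # ≈ [1, 2, 3]
```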