Journal ArticleDOI

Finding a positive definite linear combination of two Hermitian matrices

TL;DR: In this paper, a procedure is given that, in a finite number of steps, either finds a positive definite linear combination of two n-by-n Hermitian matrices over the complex field or determines that none exists; the method also applies when A and B are real symmetric sparse matrices.
About: This article was published in Linear Algebra and its Applications on 1983-06-01 and is currently open access. It has received 31 citations to date. The article focuses on the topics: Hermitian matrix & Linear combination.
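The problem the paper addresses can be illustrated with a brute-force sketch. Rather than the paper's finite-step procedure, the sketch below simply samples angles t and tests whether cos(t)·A + sin(t)·B is positive definite via an attempted Cholesky factorization; the function name, sampling strategy, and example matrices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def find_definite_combination(A, B, n_angles=360):
    """Scan angles t and return (t, C) for the first combination
    C = cos(t)*A + sin(t)*B that is positive definite, or None if
    no sampled angle works. A Cholesky attempt is the definiteness
    test: it succeeds exactly when C is positive definite."""
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        C = np.cos(t) * A + np.sin(t) * B
        try:
            np.linalg.cholesky(C)
            return t, C
        except np.linalg.LinAlgError:
            continue
    return None

# Two indefinite Hermitian matrices whose sum is positive definite:
A = np.array([[2.0, 0.0], [0.0, -1.0]])
B = np.array([[-1.0, 0.0], [0.0, 2.0]])
result = find_definite_combination(A, B)
```

Unlike this O(n_angles · n³) scan, the paper's algorithm terminates in a provably finite number of steps and certifies non-existence as well.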
Citations
Journal ArticleDOI
TL;DR: In this article, the canonical forms under congruence are established for pairs of complex or real symmetric or skew-symmetric matrices; the treatment is in the spirit of the well-known book of Gantmacher on matrix theory.

216 citations

Journal ArticleDOI
TL;DR: The distance to the nearest non-hyperbolic or non-elliptic n × n QEP is shown to be the solution of a global minimization problem with n − 1 degrees of freedom.

68 citations

Journal ArticleDOI
TL;DR: The authors survey nearly all publications from the last twenty years on the theory of, and numerical methods for, linear pencils.
Abstract: This paper surveys nearly all of the publications that have appeared in the last twenty years on the theory of and numerical methods for linear pencils. The survey is divided into the following sections: theory of canonical forms for symmetric and Hermitian pencils and the associated problem of simultaneous reduction of pairs of quadratic forms to canonical form; results on perturbation of characteristic values and deflating subspaces; numerical methods. The survey is self-contained in the sense that it includes the necessary information from the elementary theory of pencils and the theory of perturbations for the common algebraic problem Ax=λx.

42 citations

Journal ArticleDOI
TL;DR: An efficient algorithm is obtained that identifies and solves a hyperbolic or overdamped QEP maintaining symmetry throughout and guaranteeing real computed eigenvalues.
Abstract: Hyperbolic quadratic matrix polynomials $Q(\lambda) = \lambda^2 A + \lambda B + C$ are an important class of Hermitian matrix polynomials with real eigenvalues, among which the overdamped quadratics are those with nonpositive eigenvalues. Neither the definition of overdamped nor any of the standard characterizations provides an efficient way to test if a given $Q$ has this property. We show that a quadratically convergent matrix iteration based on cyclic reduction, previously studied by Guo and Lancaster, provides necessary and sufficient conditions for $Q$ to be overdamped. For weakly overdamped $Q$ the iteration is shown to be generically linearly convergent with constant at worst $1/2$, which implies that the convergence of the iteration is reasonably fast in almost all cases of practical interest. We show that the matrix iteration can be implemented in such a way that when overdamping is detected a scalar $\mu<0$ is provided that lies in the gap between the $n$ largest and $n$ smallest eigenvalues of the $n \times n$ quadratic eigenvalue problem (QEP) $Q(\lambda)x = 0$. Once such a $\mu$ is known, the QEP can be solved by linearizing to a definite pencil that can be reduced, using already available Cholesky factorizations, to a standard Hermitian eigenproblem. By incorporating an initial preprocessing stage that shifts a hyperbolic $Q$ so that it is overdamped, we obtain an efficient algorithm that identifies and solves a hyperbolic or overdamped QEP maintaining symmetry throughout and guaranteeing real computed eigenvalues.
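The role of the shift $\mu$ described in this abstract can be sketched numerically. The sketch below uses a small hypothetical overdamped quadratic (illustrative data, not from the paper), computes its eigenvalues through a general companion linearization rather than the symmetry-preserving definite-pencil route the paper develops, picks $\mu$ midway in the gap between the $n$ largest and $n$ smallest eigenvalues, and confirms that $Q(\mu)$ is negative definite via a Cholesky factorization of $-Q(\mu)$.

```python
import numpy as np

# Illustrative overdamped quadratic Q(lam) = lam^2*A + lam*B + C:
A = np.eye(2)
B = np.diag([3.0, 4.0])
C = np.diag([1.0, 2.0])
n = A.shape[0]

# General companion linearization of the QEP (not the definite
# pencil used in the paper):
comp = np.block([[np.zeros((n, n)), np.eye(n)],
                 [-np.linalg.solve(A, C), -np.linalg.solve(A, B)]])
eigs = np.sort(np.linalg.eigvals(comp).real)  # all real: Q is overdamped

# Any mu strictly between the n smallest and n largest eigenvalues
# makes Q(mu) negative definite; Cholesky of -Q(mu) verifies this:
mu = 0.5 * (eigs[n - 1] + eigs[n])
np.linalg.cholesky(-(mu**2 * A + mu * B + C))  # raises if not definite
```

With such a $\mu$ in hand, the paper's algorithm linearizes to a definite pencil and reduces it, via Cholesky factorizations, to a standard Hermitian eigenproblem with guaranteed real computed eigenvalues.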

35 citations


Cites methods from "Finding a positive definite linear ..."

  • ...Since our algorithm needs $\frac{20}{3}n^3$ flops per iteration, it is often more efficient than the Crawford–Moon algorithm applied to the pair (A,B) and is often less efficient than the Crawford–Moon algorithm working on Q via the congruence....


  • ...For easy problems, the Crawford–Moon algorithm needs about 3 iterations, while our algorithm needs 0 or 1 iterations....


  • ...The second approach is to apply to (A,B) an algorithm of Crawford and Moon [4] for detecting definiteness of Hermitian matrix pairs....


  • ...We thank Qiang Ye for helpful comments concerning the algorithm of Crawford and Moon....


  • ...Since the Crawford–Moon algorithm requires one Cholesky factorization per iteration, here of a $2n \times 2n$ matrix, it needs $\frac{8}{3}n^3$ flops per iteration, and this can be reduced to $\frac{1}{3}n^3$ flops per iteration by working directly with the $n \times n$ quadratic Q through the use of a congruence transformation, as given in the proof of [19, Theorem 3.6], for example....

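The per-iteration costs quoted in the excerpts above are consistent with a rough accounting, assuming the standard $m^3/3$ flop cost of a Cholesky factorization of an $m \times m$ matrix:

$$\frac{(2n)^3}{3} = \frac{8}{3}n^3 \ \text{flops for the } 2n \times 2n \text{ pencil}, \qquad \frac{n^3}{3} = \frac{1}{3}n^3 \ \text{flops for the } n \times n \text{ quadratic } Q.$$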

Journal ArticleDOI
TL;DR: The spectral properties of Hermitian matrix polynomials with real eigenvalues have been extensively studied, through classes such as definite or definitizable pencils; definite, hyperbolic, or quasihyperbolic matrix polynomials; and overdamped or gyroscopically stabilized quadratics, as discussed by the authors.

33 citations


Cites methods from "Finding a positive definite linear ..."

  • ...As an alternative, the recently improved arc algorithm of Crawford and Moon [4,12] efficiently detects whether λA − B is definite and determines μ such that L(μ) > 0 at the cost of just a few Cholesky factorizations....


References
Book
11 Jun 1973
TL;DR: Rounding-Error Analysis of Solution of Triangular Systems and of Gaussian Elimination.
Abstract: Preliminaries Practicalities The Direct Solution of Linear Systems Norms, Limits, and Condition Numbers The Linear Least Squares Problem Eigenvalues and Eigenvectors The QR Algorithm The Greek Alphabet and Latin Notational Correspondents Determinants Rounding-Error Analysis of Solution of Triangular Systems and of Gaussian Elimination Of Things Not Treated Bibliography Index

2,040 citations

Journal ArticleDOI
TL;DR: The LINPACK Users' Guide, by J. J. Dongarra, J. R. Bunch, C. B. Moler, and G. W. Stewart (SIAM), documenting the LINPACK library for solving linear systems and least-squares problems.

671 citations

Journal ArticleDOI
TL;DR: In this paper, the sensitivity of the eigenvalues and eigenvectors of the generalized matrix eigenvalue problem to perturbations of A and B is investigated, and error bounds for approximate deflating subspaces obtained.
Abstract: This paper considers the sensitivity of the eigenvalues and eigenvectors of the generalized matrix eigenvalue problem $Ax = \lambda Bx$ to perturbations of A and B. The notion of a deflating subspace for the problem is introduced, and error bounds for approximate deflating subspaces obtained. The bounds also provide information about the eigenvalues of the problem. The resulting perturbation bounds estimate realistically the sensitivity of the eigenvalues, even when B is singular or nearly singular. The results are applied to the important special case where A is Hermitian and B is positive definite.

129 citations

Journal ArticleDOI
TL;DR: In this paper, the eigenvalue problem Ax = λBx is shown to have a complete system of eigenvectors and real eigenvalues.

98 citations

Journal ArticleDOI
TL;DR: The method is a generalization of Rutishauser’s $LR$-method for the standard eigenvalue problem and closely resembles the $QZ$-algorithm given by Moler and Stewart for the generalized problem given above.
Abstract: In this paper, we will present and analyze an algorithm for finding ${\bf x}$ and $\lambda$ such that \[ A{\bf x} = \lambda B{\bf x},\] where A and B are $n \times n$ matrices. The algorithm does not require matrix inversion, and may be used when either or both matrices are singular. Our method is a generalization of Rutishauser’s $LR$-method [20] for the standard eigenvalue problem $A{\bf x} = \lambda {\bf x}$ and closely resembles the $QZ$-algorithm given by Moler and Stewart [13] for the generalized problem given above. Unlike the $QZ$-algorithm, which uses orthogonal transformations, our method, the $LZ$-algorithm, uses elementary transformations. When either A or B is complex, our method should be more efficient.

62 citations