
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 2006"


Journal ArticleDOI
TL;DR: This paper derives a new and relatively weak deterministic sufficient condition for uniqueness in the decomposition of higher-order tensors which have the property that the rank is smaller than the greatest dimension.
Abstract: Canonical decomposition is a key concept in multilinear algebra. In this paper we consider the decomposition of higher-order tensors which have the property that the rank is smaller than the greatest dimension. We derive a new and relatively weak deterministic sufficient condition for uniqueness. The proof is constructive. It shows that the canonical components can be obtained from a simultaneous matrix diagonalization by congruence, yielding a new algorithm. From the deterministic condition we derive an easy-to-check dimensionality condition that guarantees generic uniqueness.

435 citations


Journal ArticleDOI
TL;DR: This work develops a systematic approach to generating large classes of linearizations for a matrix polynomial $P$, shows how to simply construct two vector spaces of pencils that generalize the companion forms of $P$, and proves that almost all of these pencils are linearizations for $P$.
Abstract: The classical approach to investigating polynomial eigenvalue problems is linearization, where the polynomial is converted into a larger matrix pencil with the same eigenvalues. For any polynomial there are infinitely many linearizations with widely varying properties, but in practice the companion forms are typically used. However, these companion forms are not always entirely satisfactory, and linearizations with special properties may sometimes be required. Given a matrix polynomial $P$, we develop a systematic approach to generating large classes of linearizations for $P$. We show how to simply construct two vector spaces of pencils that generalize the companion forms of $P$, and prove that almost all of these pencils are linearizations for $P$. Eigenvectors of these pencils are shown to be closely related to those of $P$. A distinguished subspace is then isolated, and the special properties of these pencils are investigated. These spaces of pencils provide a convenient arena in which to look for structured linearizations of structured polynomials, as well as to try to optimize the conditioning of linearizations.
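As a concrete illustration of linearization, the following sketch (with small arbitrary example matrices, not taken from the paper) builds the first companion pencil of a quadratic $P(\lambda)=\lambda^2 M+\lambda C+K$ and checks that every eigenvalue of the pencil makes $P$ singular:

```python
import numpy as np
from scipy.linalg import eig

# Quadratic matrix polynomial P(lambda) = lambda^2 M + lambda C + K.
# Example matrices are arbitrary, for illustration only.
M = np.eye(2)
C = np.array([[0.4, 0.1], [0.0, 0.3]])
K = np.array([[2.0, 1.0], [1.0, 3.0]])
n = M.shape[0]

# First companion linearization: the pencil lambda*X + Y with
#   X = [[M, 0], [0, I]],  Y = [[C, K], [-I, 0]]
# shares its 2n eigenvalues with P.
X = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
Y = np.block([[C, K], [-np.eye(n), np.zeros((n, n))]])

# Eigenvalues of the pencil: solve -Y v = lambda X v.
eigvals = eig(-Y, X, right=False)

# Each eigenvalue of the pencil makes P(lambda) singular.
for lam in eigvals:
    assert abs(np.linalg.det(lam**2 * M + lam * C + K)) < 1e-8
print(np.sort_complex(eigvals))
```

The block-elimination argument behind this pencil: the second block row forces $x_1=\lambda x_2$, and substituting into the first block row recovers $(\lambda^2 M+\lambda C+K)x_2=0$.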

333 citations


Journal ArticleDOI
TL;DR: This paper analyzes the existence and uniqueness of a special class of linearizations which reflect the structure of structured polynomials, and therefore preserve symmetries in their spectra, and shows how they may be systematically constructed.
Abstract: Many applications give rise to nonlinear eigenvalue problems with an underlying structured matrix polynomial. In this paper several useful classes of structured polynomials (e.g., palindromic, even, odd) are identified and the relationships between them explored. A special class of linearizations which reflect the structure of these polynomials, and therefore preserve symmetries in their spectra, is introduced and investigated. We analyze the existence and uniqueness of such linearizations and show how they may be systematically constructed.

296 citations


Journal ArticleDOI
TL;DR: The quadratic convergence of the proposed Newton method for the nearest correlation matrix problem is proved, and numerical experiments confirm the fast convergence and high efficiency of the method.
Abstract: The nearest correlation matrix problem is to find a correlation matrix which is closest to a given symmetric matrix in the Frobenius norm. The well-studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods such as BFGS have been used directly to obtain globally convergent methods. Since the objective function in the dual approach is not twice continuously differentiable, these methods converge at best linearly. In this paper, we investigate a Newton-type method for the nearest correlation matrix problem. Based on recent developments on strongly semismooth matrix valued functions, we prove the quadratic convergence of the proposed Newton method. Numerical experiments confirm the fast convergence and the high efficiency of the method.
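For contrast with the Newton approach, the feasible set can be illustrated with a plain alternating-projections sketch (without the Dykstra correction used in Higham's earlier method, so it finds a nearby rather than the nearest correlation matrix; the input matrix is a made-up example):

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_unit_diag(A):
    """Project onto the symmetric matrices with unit diagonal."""
    B = A.copy()
    np.fill_diagonal(B, 1.0)
    return B

def nearby_correlation(G, iters=1000):
    """Alternate projections between the two constraint sets."""
    Y = (G + G.T) / 2
    for _ in range(iters):
        Y = proj_unit_diag(proj_psd(Y))
    return Y

# Symmetric but indefinite input (eigenvalues 1.9, 1.9, -0.8).
G = np.array([[1.0,  0.9,  0.9],
              [0.9,  1.0, -0.9],
              [0.9, -0.9,  1.0]])
Cmat = nearby_correlation(G)
print(np.linalg.eigvalsh(Cmat))  # (near-)nonnegative spectrum, unit diagonal
```

The two projections here are exactly the building blocks the dual formulation works with: the PSD cone and the unit-diagonal affine set.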

288 citations


Journal ArticleDOI
TL;DR: An algebraic representation useful for matrices with off-diagonal blocks of low numerical rank is considered, and a fast and stable solver for linear systems of equations in which the coefficient matrix has this representation is presented.
Abstract: We consider an algebraic representation that is useful for matrices with off-diagonal blocks of low numerical rank. A fast and stable solver for linear systems of equations in which the coefficient matrix has this representation is presented. We also present a fast algorithm to construct the hierarchically semiseparable representation in the general case.

254 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the smoothed precision required to solve Ax = b, for any b, using Gaussian elimination without pivoting is logarithmic.
Abstract: Let $\bar{A}$ be an arbitrary matrix and let $A$ be a slight random perturbation of $\bar{A}$. We prove that it is unlikely that $A$ has a large condition number. Using this result, we prove that it is unlikely that $A$ has large growth factor under Gaussian elimination without pivoting. By combining these results, we show that the smoothed precision necessary to solve $Ax = b$, for any $b$, using Gaussian elimination without pivoting is logarithmic. Moreover, when $\bar{A}$ is an all-zero square matrix, our results significantly improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 18 (1997), pp. 499-517).

219 citations


Journal ArticleDOI
TL;DR: A fast direct solver for certain classes of dense structured linear systems is presented that works by first converting the given dense system to a larger system of block sparse equations and then using standard sparse direct solvers.
Abstract: In this paper we present a fast direct solver for certain classes of dense structured linear systems that works by first converting the given dense system to a larger system of block sparse equations and then uses standard sparse direct solvers. The kind of matrix structures that we consider are induced by numerical low rank in the off-diagonal blocks of the matrix and are related to the structures exploited by the fast multipole method (FMM) of Greengard and Rokhlin. The special structure that we exploit in this paper is captured by what we term the hierarchically semiseparable (HSS) representation of a matrix. Numerical experiments indicate that the method is probably backward stable.

155 citations


Journal ArticleDOI
TL;DR: This work provides a self-contained treatment of some of the key properties of $\mathbb{DL}(P)$ together with some new, more concise proofs.
Abstract: A standard way of treating the polynomial eigenvalue problem $P(\lambda)x = 0$ is to convert it into an equivalent matrix pencil—a process known as linearization. Two vector spaces of pencils $\mathbb{L}_1(P)$ and $\mathbb{L}_2(P)$, and their intersection $\mathbb{DL}(P)$, have recently been defined and studied by Mackey, Mackey, Mehl, and Mehrmann. The aim of our work is to gain new insight into these spaces and the extent to which their constituent pencils inherit structure from $P$. For arbitrary polynomials we show that every pencil in $\mathbb{DL}(P)$ is block symmetric and we obtain a convenient basis for $\mathbb{DL}(P)$ built from block Hankel matrices. This basis is then exploited to prove that the first $\deg(P)$ pencils in a sequence constructed by Lancaster in the 1960s generate $\mathbb{DL}(P)$. When $P$ is symmetric, we show that the symmetric pencils in $\mathbb{L}_1(P)$ comprise $\mathbb{DL}(P)$, while for Hermitian $P$ the Hermitian pencils in $\mathbb{L}_1(P)$ form a proper subset of $\mathbb{DL}(P)$ that we explicitly characterize. Almost all pencils in each of these subsets are shown to be linearizations. In addition to obtaining new results, this work provides a self-contained treatment of some of the key properties of $\mathbb{DL}(P)$ together with some new, more concise proofs.

147 citations


Journal ArticleDOI
TL;DR: This work investigates the conditioning of linearizations from a vector space $\mathbb{DL}(P)$ of pencils recently identified and studied and analyzes the eigenvalue conditioning of the widely used first and second companion linearizations.
Abstract: The standard way of solving the polynomial eigenvalue problem of degree $m$ in $n\times n$ matrices is to “linearize” to a pencil in $mn\times mn$ matrices and solve the generalized eigenvalue problem. For a given polynomial, $P$, infinitely many linearizations exist and they can have widely varying eigenvalue condition numbers. We investigate the conditioning of linearizations from a vector space $\mathbb{DL}(P)$ of pencils recently identified and studied by Mackey, Mackey, Mehl, and Mehrmann. We look for the best conditioned linearization and compare the conditioning with that of the original polynomial. Two particular pencils are shown always to be almost optimal over linearizations in $\mathbb{DL}(P)$ for eigenvalues of modulus greater than or less than 1, respectively, provided that the problem is not too badly scaled and that the pencils are linearizations. Moreover, under this scaling assumption, these pencils are shown to be about as well conditioned as the original polynomial. For quadratic eigenvalue problems that are not too heavily damped, a simple scaling is shown to convert the problem to one that is well scaled. We also analyze the eigenvalue conditioning of the widely used first and second companion linearizations. The conditioning of the first companion linearization relative to that of $P$ is shown to depend on the coefficient matrix norms, the eigenvalue, and the left eigenvectors of the linearization and of $P$. The companion form is found to be potentially much more ill conditioned than $P$, but if the 2-norms of the coefficient matrices are all approximately 1 then the companion form and $P$ are guaranteed to have similar condition numbers. Analogous results hold for the second companion form. Our results are phrased in terms of both the standard relative condition number and the condition number of Dedieu and Tisseur [Linear Algebra Appl., 358 (2003), pp. 71-94] for the problem in homogeneous form, this latter condition number having the advantage of applying to zero and infinite eigenvalues.

136 citations


Journal ArticleDOI
TL;DR: A unified convergence theory for the structure-preserving doubling algorithms for a class of Riccati-type matrix equations is established, using only elementary matrix theory.
Abstract: In this paper, we introduce the doubling transformation, a structure-preserving transformation for symplectic pencils, and present its basic properties. Based on these properties, a unified convergence theory for the structure-preserving doubling algorithms for a class of Riccati-type matrix equations is established, using only elementary matrix theory.

121 citations


Journal ArticleDOI
TL;DR: This work shows that MGS-GMRES is backward stable, and uses other new results on MGS and its loss of orthogonality, together with an important but neglected condition number, and a relation between residual norms and certain singular values.
Abstract: The generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856-869] for solving linear systems Ax=b is implemented as a sequence of least squares problems involving Krylov subspaces of increasing dimensions. The most usual implementation is modified Gram-Schmidt GMRES (MGS-GMRES). Here we show that MGS-GMRES is backward stable. The result depends on a more general result on the backward stability of a variant of the MGS algorithm applied to solving a linear least squares problem, and uses other new results on MGS and its loss of orthogonality, together with an important but neglected condition number, and a relation between residual norms and certain singular values.
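A minimal textbook sketch of MGS-GMRES (unrestarted, dense, with a small made-up system), just to fix notation; this is not the optimized implementation whose backward stability the paper analyzes:

```python
import numpy as np

def mgs_gmres(A, b, tol=1e-12):
    """Unrestarted GMRES using modified Gram-Schmidt orthogonalization."""
    n = len(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):               # modified Gram-Schmidt loop
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        # Least squares problem min_y || beta*e1 - H_k y || over the Krylov basis.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) <= tol * beta:
            break
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = mgs_gmres(A, b)
print(np.linalg.norm(b - A @ x))             # near machine precision
```

The nested loop is the MGS variant discussed above: each new Krylov vector is orthogonalized against the previous basis vectors one at a time, updating `v` in place.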

Journal ArticleDOI
TL;DR: A hybrid algorithm is developed that computes a Schur decomposition, takes square roots of the upper (quasi-) triangular factor, and applies the coupled Newton iteration to a matrix for which fast convergence is guaranteed.
Abstract: Newton’s method for the inverse matrix $p$th root, $A^{-1/p}$, has the attraction that it involves only matrix multiplication. We show that if the starting matrix is $c^{-1}I$ for $c\in\mathbb{R}^+$ then the iteration converges quadratically to $A^{-1/p}$ if the eigenvalues of $A$ lie in a wedge-shaped convex set containing the disc $\{z: |z-c^p| < c^p\}$. We derive an optimal choice of $c$ for the case where $A$ has real, positive eigenvalues. An application is described to roots of transition matrices from Markov models, in which for certain problems the convergence condition is satisfied with $c=1$. Although the basic Newton iteration is numerically unstable, a coupled version is stable and a simple modification of it provides a new coupled iteration for the matrix $p$th root. For general matrices we develop a hybrid algorithm that computes a Schur decomposition, takes square roots of the upper (quasi-) triangular factor, and applies the coupled Newton iteration to a matrix for which fast convergence is guaranteed. The new algorithm can be used to compute either $A^{1/p}$ or $A^{-1/p}$, and for large $p$ that are not highly composite it is more efficient than the method of Smith based entirely on the Schur decomposition.
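The basic (uncoupled) iteration from the abstract can be sketched as follows; as noted there it is numerically unstable in general, but it behaves well on this tiny diagonal example with $c=1$, whose eigenvalues lie in the disc $\{z: |z-1|<1\}$:

```python
import numpy as np

def inverse_pth_root(A, p, c=1.0, iters=50):
    """Newton iteration X_{k+1} = X_k((p+1)I - X_k^p A)/p with X_0 = I/c,
    converging to A^{-1/p} under the eigenvalue condition in the abstract.
    Unstable in finite precision for general A; demonstration use only."""
    n = A.shape[0]
    X = np.eye(n) / c
    for _ in range(iters):
        X = X @ ((p + 1) * np.eye(n) - np.linalg.matrix_power(X, p) @ A) / p
    return X

A = np.diag([0.5, 1.0, 1.5])    # eigenvalues inside {z: |z - 1| < 1}
p = 3
X = inverse_pth_root(A, p)
print(X)                        # approx diag(1.2599, 1.0, 0.8736) = A^{-1/3}
```

Because $X_0$ is a multiple of the identity, all iterates are polynomials in $A$ and commute with it, which is what makes the simple update formula valid.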

Journal ArticleDOI
TL;DR: It is proved that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter.
Abstract: We consider the problem of minimizing a fractional quadratic problem involving the ratio of two indefinite quadratic functions, subject to a two-sided quadratic form constraint. This formulation is motivated by the so-called regularized total least squares (RTLS) problem. A key difficulty with this problem is its nonconvexity, and all current known methods to solve it are guaranteed only to converge to a point satisfying first order necessary optimality conditions. We prove that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter. As a result, we derive an efficient algorithm that produces an $\epsilon$-global optimal solution in a computational effort of $O(n^3 \log \epsilon^{-1})$. The algorithm is tested on problems arising from the inverse Laplace transform and image deblurring. Comparison to other well-known RTLS solvers illustrates the attractiveness of our new method.

Journal ArticleDOI
TL;DR: An algorithm based on methods of Gu and Burke-Lewis-Overton that estimates the distance to uncontrollability to any prescribed accuracy is presented; it requires $O(n^4)$ operations on average, an improvement over previous methods, which have complexity $O(n^6)$, where $n$ is the order of the system.
Abstract: The distance to uncontrollability for a linear control system is the distance (in the 2-norm) to the nearest uncontrollable system. We present an algorithm based on methods of Gu and Burke-Lewis-Overton that estimates the distance to uncontrollability to any prescribed accuracy. The new method requires $O(n^4)$ operations on average, which is an improvement over previous methods, which have complexity $O(n^6)$, where $n$ is the order of the system. Numerical experiments indicate that the new method is reliable in practice.

Journal ArticleDOI
TL;DR: It is proved that Newton's iteration for computing the principal matrix $p$th root $A^{1/p}$ of an $n\times n$ matrix $A$ converges for any matrix $A$ having eigenvalues with modulus less than 1 and with positive real parts.
Abstract: Stable versions of Newton's iteration for computing the principal matrix $p$th root $A^{1/p}$ of an $n\times n$ matrix $A$ are provided. In the case in which $X_0$ is the identity matrix, it is proved that the method converges for any matrix $A$ having eigenvalues with modulus less than 1 and with positive real parts. Based on these results we provide a general algorithm for computing the principal $p$th root of any matrix $A$ having no nonpositive real eigenvalues. The algorithm has quadratic convergence, is stable in a neighborhood of the solution, and has a cost of $O(n^3\log p)$ operations per step. Numerical experiments and comparisons are performed.

Journal ArticleDOI
TL;DR: This work generalizes the recently proposed Quadratic Extrapolation, interpreting it on the basis of the method of moments of Vorobyev and as a Krylov subspace method, gives expressions for the PageRank vector, and studies the mathematical properties of the power method.
Abstract: An important problem in Web search is determining the importance of each page. After introducing the main characteristics of this problem, we will see that, from the mathematical point of view, it could be solved by computing the left principal eigenvector (the PageRank vector) of a matrix related to the structure of the Web by using the power method. We will give expressions of the PageRank vector and study the mathematical properties of the power method. Various Padé-style approximations of the PageRank vector will be given. Since the convergence of the power method is slow, it has to be accelerated. This problem will be discussed. Recently, several acceleration methods were proposed. We will give a theoretical justification for these methods. In particular, we will generalize the recently proposed Quadratic Extrapolation, and we interpret it on the basis of the method of moments of Vorobyev, and as a Krylov subspace method. Acceleration results are given for the various epsilon-algorithms, and for the E-algorithm. Other algorithms for this problem are also discussed.
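The baseline PageRank computation can be sketched with the power method on a toy 4-page web (link structure and damping factor 0.85 are illustrative assumptions; a column-stochastic convention is used here, so the abstract's left eigenvector of the row-stochastic matrix becomes a right eigenvector):

```python
import numpy as np

# Toy web: page -> pages it links to (hypothetical 4-page example).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, alpha = 4, 0.85

S = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        S[j, i] = 1.0 / len(outs)        # column-stochastic link matrix

# Google matrix: damped link matrix plus rank-one teleportation term.
G = alpha * S + (1.0 - alpha) / n * np.ones((n, n))

x = np.ones(n) / n                        # start from the uniform vector
for _ in range(200):                      # power method
    x = G @ x
print(x)                                  # PageRank vector, sums to 1
```

The slow convergence the abstract mentions comes from the damping factor: the power method error contracts by roughly `alpha` per step, which is what the extrapolation techniques aim to accelerate.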

Journal ArticleDOI
TL;DR: This paper investigates the effect of structure-preserving perturbations on the eigenvalues of linearly and nonlinearly structured eigenvalue problems, and shows that under reasonable assumptions on the scalar product, the structured and unstructured eigen value condition numbers are equal for structures in Jordan algebras.
Abstract: This paper investigates the effect of structure-preserving perturbations on the eigenvalues of linearly and nonlinearly structured eigenvalue problems. Particular attention is paid to structures that form Jordan algebras, Lie algebras, and automorphism groups of a scalar product. Bounds and computable expressions for structured eigenvalue condition numbers are derived for these classes of matrices, which include complex symmetric, pseudo-symmetric, persymmetric, skew-symmetric, Hamiltonian, symplectic, and orthogonal matrices. In particular we show that under reasonable assumptions on the scalar product, the structured and unstructured eigenvalue condition numbers are equal for structures in Jordan algebras. For Lie algebras, the effect on the condition number of incorporating structure varies greatly with the structure. We identify Lie algebras for which structure does not affect the eigenvalue condition number.

Journal ArticleDOI
TL;DR: This paper derives sufficient conditions such that the blocks of the n-step block tridiagonal transition matrix of the Markov chain can be represented as integrals with respect to a matrix valued spectral measure.
Abstract: In this paper we study the connection between matrix measures and random walks with a block tridiagonal transition matrix. We derive sufficient conditions such that the blocks of the n-step block tridiagonal transition matrix of the Markov chain can be represented as integrals with respect to a matrix valued spectral measure. Several stochastic properties of the processes are characterized by means of this matrix measure. In many cases this measure is supported in the interval [-1,1]. The results are illustrated by several examples including random walks on a grid and the embedded chain of a queuing system.

Journal ArticleDOI
TL;DR: For conjugate-gradient-like methods applied to block symmetric indefinite linear systems arising from saddle-point problems, or regularizations thereof, a number of families of implicit factorizations are constructed that are capable of reproducing the required sub-blocks and (some of) the remaining blocks.
Abstract: We consider conjugate-gradient like methods for solving block symmetric indefinite linear systems that arise from saddle-point problems or, in particular, regularizations thereof. Such methods require preconditioners that preserve certain sub-blocks from the original systems but allow considerable flexibility for the remaining blocks. We construct a number of families of implicit factorizations that are capable of reproducing the required sub-blocks and (some) of the remainder. These generalize known implicit factorizations for the unregularized case. Improved eigenvalue clustering is possible if additionally some of the noncrucial blocks are reproduced. Numerical experiments confirm that these implicit-factorization preconditioners can be very effective in practice.

Journal ArticleDOI
TL;DR: The result for the squares of cosines can be viewed as a bound on the change in the Ritz values of an orthogonal projector, and it establishes the conjecture that the root two factor in an earlier estimate may be eliminated.
Abstract: Many inequality relations between real vector quantities can be succinctly expressed as “weak (sub)majorization” relations using the symbol ${\prec}_{w}$. We explain these ideas and apply them in several areas, angles between subspaces, Ritz values, and graph Laplacian spectra, which we show are all surprisingly related. Let $\Theta({\mathcal X},{\mathcal Y})$ be the vector of principal angles in nondecreasing order between subspaces ${\mathcal X}$ and ${\mathcal Y}$ of a finite dimensional space ${\mathcal H}$ with a scalar product. We consider the change in principal angles between subspaces ${\mathcal X}$ and ${\mathcal Z}$, where we let ${\mathcal X}$ be perturbed to give ${\mathcal Y}$. We measure the change using weak majorization. We prove that $|\cos^2\Theta({\mathcal X},{\mathcal Z})-\cos^2\Theta({\mathcal Y},{\mathcal Z})| {\prec}_{w} \sin\Theta({\mathcal X},{\mathcal Y})$, and give similar results for differences of cosines, i.e., $|\cos\Theta({\mathcal X},{\mathcal Z})-\cos\Theta({\mathcal Y},{\mathcal Z})| {\prec}_{w} \sin\Theta({\mathcal X},{\mathcal Y})$, and of sines and sines squared, assuming $\dim {\mathcal X} = \dim {\mathcal Y}$. We observe that $\cos^2\Theta({\mathcal X},{\mathcal Z})$ can be interpreted as a vector of Ritz values, where the Rayleigh-Ritz method is applied to the orthogonal projector on ${\mathcal Z}$ using ${\mathcal X}$ as a trial subspace. Thus, our result for the squares of cosines can be viewed as a bound on the change in the Ritz values of an orthogonal projector. 
We then extend it to prove a general result for Ritz values for an arbitrary Hermitian operator $A$, not necessarily a projector: let $\Lambda(P_{{\mathcal X}}A|_{{\mathcal X}})$ be the vector of Ritz values in nonincreasing order for $A$ on a trial subspace ${\mathcal X}$, which is perturbed to give another trial subspace ${\mathcal Y}$; then $| \Lambda(P_{{\mathcal X}}A|_{{\mathcal X}})- \Lambda(P_{{\mathcal Y}}A|_{{\mathcal Y}})|\prec_w (\lambda_{\max}-\lambda_{\min})\,\sin\Theta({\mathcal X},{\mathcal Y})$, where the constant is the difference between the largest and the smallest eigenvalues of $A$. This establishes our conjecture that the root two factor in our earlier estimate may be eliminated. Our present proof is based on a classical but rarely used technique of extending a Hermitian operator in ${\mathcal H}$ to an orthogonal projector in the “double” space ${\mathcal H}^2$. An application of our Ritz values weak majorization result for Laplacian graph spectra comparison is suggested, based on the possibility of interpreting eigenvalues of the edge Laplacian of a given graph as Ritz values of the edge Laplacian of the complete graph. We prove a weak majorization bound on the differences $|\lambda^1_k-\lambda^2_k|$, where $\lambda^1_k$ and $\lambda^2_k$ are all ordered elements of the Laplacian spectra of two graphs with the same $n$ vertices and with $l$ equal to the number of differing edges.
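The cosine-squared bound can be checked numerically; the sketch below (random subspaces of equal dimension, an assumed construction not from the paper) computes principal angles via the SVD and tests the weak majorization inequality:

```python
import numpy as np
from scipy.linalg import orth, svdvals

def principal_angles(X, Y):
    """Principal angles, nondecreasing, between the column spaces of X, Y."""
    cosines = svdvals(orth(X).T @ orth(Y))          # nonincreasing cosines
    return np.arccos(np.clip(cosines, -1.0, 1.0))   # hence nondecreasing angles

def weakly_majorized(a, b, tol=1e-12):
    """Check a prec_w b via partial sums of the entries sorted decreasingly."""
    a = np.sort(a)[::-1]
    b = np.sort(b)[::-1]
    return bool(np.all(np.cumsum(a) <= np.cumsum(b) + tol))

rng = np.random.default_rng(0)
n, d = 6, 2
X = rng.standard_normal((n, d))
Z = rng.standard_normal((n, d))
Y = X + 0.1 * rng.standard_normal((n, d))           # perturbation of X

lhs = np.abs(np.cos(principal_angles(X, Z)) ** 2
             - np.cos(principal_angles(Y, Z)) ** 2)
rhs = np.sin(principal_angles(X, Y))
print(weakly_majorized(lhs, rhs))                   # True, as the theorem asserts
```

Here `dim X = dim Y = 2`, matching the assumption stated in the abstract for the cosine results.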

Journal ArticleDOI
TL;DR: This paper considers how to choose an Hermitian or skew-Hermitian matrix $X$ such that $A - BX \pm X^*B^*$ attain the minimal possible ranks.
Abstract: Suppose $A - BXB^*$, $A - BX - X^*B^*$, and $A - BX + X^*B^*$ are three linear matrix expressions over the field of complex numbers, where $A$ is an Hermitian or skew-Hermitian matrix. In this paper, we consider how to choose an Hermitian or skew-Hermitian matrix $X$ such that $A - BXB^*$ attains the maximal and minimal possible ranks, and how to choose $X$ such that $A - BX \pm X^*B^*$ attain the minimal possible ranks. Some applications to Hermitian or skew-Hermitian solutions of matrix equations with symmetric patterns are also given.

Journal ArticleDOI
TL;DR: A new technique is developed which works for any smoothing norm of the form $\|L\,x\|_2$ and which preserves symmetry if the coefficient matrix is symmetric.
Abstract: When GMRES (or a similar minimum-residual algorithm such as RRGMRES, MINRES, or MR-II) is applied to a discrete ill-posed problem with a square matrix, in some cases the iterates can be considered as regularized solutions. We show how to precondition these methods in such a way that the iterations take into account a smoothing norm for the solution. This technique is well established for CGLS, but it does not immediately carry over to minimum-residual methods when the smoothing norm is a seminorm or a Sobolev norm. We develop a new technique which works for any smoothing norm of the form $\|L\,x\|_2$ and which preserves symmetry if the coefficient matrix is symmetric. We also discuss the efficient implementation of our preconditioning technique, and we demonstrate its performance with numerical examples in one and two dimensions.

Journal ArticleDOI
TL;DR: An appropriate condition number for invariant subspaces subject to structured perturbations of structured matrices is proposed, which extends naturally to structured generalized eigenvalue problems such as palindromic matrix pencils.
Abstract: Invariant subspaces of structured matrices are sometimes better conditioned with respect to structured perturbations than with respect to general perturbations. Sometimes they are not. This paper proposes an appropriate condition number $c_{\mathbb{S}}$ for invariant subspaces subject to structured perturbations. Several examples compare $c_{\mathbb{S}}$ with the unstructured condition number. The examples include block cyclic, Hamiltonian, and orthogonal matrices. This approach extends naturally to structured generalized eigenvalue problems such as palindromic matrix pencils.

Journal ArticleDOI
TL;DR: An a priori bound is derived for the AMLS approximation of eigenvalues by rewriting the original problem as a rational eigenproblem of the same dimension as the projected problem and taking advantage of a minmax characterization for the rational eigenproblem.
Abstract: The automated multilevel substructuring (AMLS) method has been developed to reduce the computational demands of frequency response analysis and has recently been proposed as an alternative to iterative projection methods like those of Lanczos or Jacobi--Davidson for computing a large number of eigenvalues for matrices of very large dimension. Based on Schur complements and modal approximations of submatrices on several levels, AMLS constructs a projected eigenproblem which yields good approximations of eigenvalues at the lower end of the spectrum. Rewriting the original problem as a rational eigenproblem of the same dimension as the projected problem and taking advantage of a minmax characterization for the rational eigenproblem, we derive an a priori bound for the AMLS approximation of eigenvalues.

Journal ArticleDOI
TL;DR: Using various matrix decompositions, a general solution to the inverse quadratic eigenvalue problem of constructing real symmetric matrices is derived, along with several particular solutions possessing additional eigeninformation or special properties.
Abstract: Given $k$ pairs of complex numbers and vectors (closed under conjugation), we consider the inverse quadratic eigenvalue problem of constructing $n\times n$ real symmetric matrices $M$, $C$, and $K$ (with $M$ positive definite) so that the quadratic pencil $Q(\lambda)\equiv \lambda^2M+\lambda C+K$ has the given $k$ pairs as eigenpairs. Using various matrix decompositions, we first construct a general solution to this problem with $k\le n$. Then, with appropriate choices of degrees of freedom in the general solution, we construct several particular solutions with additional eigeninformation or special properties. Numerical results illustrating these solutions are also presented.

Journal ArticleDOI
TL;DR: An extension of the small-bulge multishift QR algorithm is developed, which chases chains of many small bulges instead of only one bulge in each QZ iteration, which allows the effective use of level 3 BLAS operations, which in turn can provide efficient utilization of high performance computing systems with deep memory hierarchies.
Abstract: New variants of the QZ algorithm for solving the generalized eigenvalue problem are proposed. An extension of the small-bulge multishift QR algorithm is developed, which chases chains of many small bulges instead of only one bulge in each QZ iteration. This allows the effective use of level 3 BLAS operations, which in turn can provide efficient utilization of high performance computing systems with deep memory hierarchies. Moreover, an extension of the aggressive early deflation strategy is proposed, which can identify and deflate converged eigenvalues long before classic deflation strategies would. Consequently, the number of overall QZ iterations needed until convergence is considerably reduced. As a third ingredient, we reconsider the deflation of infinite eigenvalues and present a new deflation algorithm, which is particularly effective in the presence of a large number of infinite eigenvalues. Combining all these developments, our implementation significantly improves existing implementations of the QZ algorithm. This is demonstrated by numerical experiments with random matrix pairs as well as with matrix pairs arising from various applications.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an unsymmetric variant of the reverse Cuthill-McKee algorithm to reduce the total bandwidth of a sparse symmetric matrix.
Abstract: For a sparse symmetric matrix, there has been much attention given to algorithms for reducing the bandwidth. As far as we can see, little has been done for the unsymmetric matrix $A$, which has distinct lower and upper bandwidths $l$ and $u$. When Gaussian elimination with row interchanges is applied, the lower bandwidth is unaltered, while the upper bandwidth becomes $l+u$. With column interchanges, the upper bandwidth is unaltered, while the lower bandwidth becomes $l+u$. We therefore seek to reduce $\min (l,u)+l+u$, which we call the total bandwidth. We compare applying the reverse Cuthill-McKee algorithm to $A+A^T$, to the row graph of $A$, and to the bipartite graph of $A$. We also propose an unsymmetric variant of the reverse Cuthill-McKee algorithm. In addition, we have adapted the node-centroid and hill-climbing ideas of Lim, Rodrigues, and Xiao to the unsymmetric case. We have found that using these to refine a Cuthill-McKee-based ordering can give significant further bandwidth reductions. Numerical results for a range of practical problems are presented and comparisons made with the recent lexicographical method of Baumann, Fleischmann, and Mutzbauer.
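One of the strategies compared in the abstract, applying reverse Cuthill-McKee to the symmetrized pattern $A+A^T$, can be sketched with SciPy (the example matrix is made up):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidths(A):
    """Lower and upper bandwidths l and u of a dense matrix."""
    i, j = np.nonzero(A)
    return int(np.max(i - j, initial=0)), int(np.max(j - i, initial=0))

# Small unsymmetric matrix with scattered nonzeros (arbitrary example).
A = np.zeros((6, 6))
for i, j in [(0, 0), (0, 5), (5, 0), (1, 1), (1, 4), (4, 1), (2, 2),
             (2, 5), (5, 2), (3, 3), (4, 4), (5, 5)]:
    A[i, j] = 1.0

# Reverse Cuthill-McKee on the pattern of A + A^T, applied symmetrically
# to rows and columns of A.
pattern = csr_matrix(((A + A.T) != 0).astype(float))
perm = reverse_cuthill_mckee(pattern, symmetric_mode=True)
B = A[np.ix_(perm, perm)]

l0, u0 = bandwidths(A)
l1, u1 = bandwidths(B)
# "Total bandwidth" min(l,u) + l + u is the quantity the paper reduces.
print(min(l0, u0) + l0 + u0, "->", min(l1, u1) + l1 + u1)
```

Symmetric row-and-column permutation preserves the sparsity count while clustering the nonzeros near the diagonal, which is what shrinks the total bandwidth.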

Journal ArticleDOI
TL;DR: Necessary and sufficient conditions for the energy norm convergence of the classical iterative methods for semidefinite linear systems are obtained; these generalize the classic notion of P-regularity introduced by Keller.
Abstract: Necessary and sufficient conditions for the energy norm convergence of the classical iterative methods for semidefinite linear systems are obtained in this paper. These new conditions generalize the classic notion of P-regularity introduced by Keller [J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2 (1965), pp. 281-290].

Journal ArticleDOI
TL;DR: The methods are iterative in nature and utilize alternating projection ideas; for the symmetric problem the main computational component of each iteration is an eigenvalue-eigenvector decomposition, while for the other algorithm it is a Schur matrix decomposition.
Abstract: Presented here are two related numerical methods, one for the inverse eigenvalue problem for nonnegative or stochastic matrices and another for the inverse eigenvalue problem for symmetric nonnegative matrices. The methods are iterative in nature and utilize alternating projection ideas. For the algorithm for the symmetric problem, the main computational component of each iteration is an eigenvalue-eigenvector decomposition, while for the other algorithm, it is a Schur matrix decomposition. Convergence properties of the algorithms are investigated and numerical results are also presented. While the paper deals with two specific types of inverse eigenvalue problems, the ideas presented here should be applicable to many other inverse eigenvalue problems, including those involving nonsymmetric matrices.
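The alternating-projections idea for the symmetric problem can be sketched as below (the specific projections and target spectrum are illustrative assumptions, and plain alternation, unlike the algorithms in the paper, carries no convergence guarantee):

```python
import numpy as np

def proj_spectrum(A, lam):
    """Keep the eigenvectors of the symmetric part of A, replace the
    (sorted) eigenvalues by the prescribed spectrum lam."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.sort(lam)) @ V.T

def proj_nonnegative(A):
    """Project onto symmetric matrices with nonnegative entries."""
    return np.maximum((A + A.T) / 2, 0.0)

target = np.array([4.0, 1.0, -1.0])   # prescribed spectrum (assumed realizable)
rng = np.random.default_rng(1)
A = rng.random((3, 3))
for _ in range(500):
    A = proj_nonnegative(proj_spectrum(A, target))

print(np.linalg.eigvalsh(A))          # compare with sorted(target)
```

Each pass alternates between the set of symmetric matrices with the prescribed spectrum and the set of entrywise-nonnegative symmetric matrices, mirroring the alternating projection structure described above.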

Journal ArticleDOI
TL;DR: A convergence result for the variable shift case is proved which extends current results for the case of a fixed shift, and the approach from [V. Simoncini and L. Eldén, BIT, 42 (2002), pp. 159-182] to modify the right-hand side when using preconditioned solves is considered.
Abstract: In this paper we analyze inexact inverse iteration for the nonsymmetric generalized eigenvalue problem $\bf{A}\bf{x} = \lambda \bf{M}\bf{x}$, where $\bf{M}$ is symmetric positive definite and the problem is diagonalizable. Our analysis is designed to apply to the case when $\bf{A}$ and $\bf{M}$ are large and sparse and preconditioned iterative methods are used to solve shifted linear systems with coefficient matrix $\bf{A}-\sigma \bf{M}$. We prove a convergence result for the variable shift case (for example, where the shift is the Rayleigh quotient) which extends current results for the case of a fixed shift. Additionally, we consider the approach from [V. Simoncini and L. Eldén, BIT, 42 (2002), pp. 159-182] to modify the right-hand side when using preconditioned solves. Several numerical experiments are presented that illustrate the theory and provide a basis for the discussion of practical issues.
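A dense sketch of inverse iteration with a variable (Rayleigh quotient) shift for $\bf{A}\bf{x}=\lambda\bf{M}\bf{x}$; the example matrices are made up, and a direct solve stands in for the preconditioned iterative solves analyzed in the paper:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.3, 3.0, 1.0],
              [0.0, 0.5, 5.0]])       # nonsymmetric
M = np.diag([1.0, 2.0, 1.0])          # symmetric positive definite

x = np.ones(3)
x /= np.linalg.norm(x)
sigma = 1.0                           # initial shift
for _ in range(30):
    # One step of inverse iteration: solve the shifted linear system.
    y = np.linalg.solve(A - sigma * M, M @ x)
    x = y / np.linalg.norm(y)
    sigma = (x @ A @ x) / (x @ M @ x) # variable (Rayleigh quotient) shift
    if np.linalg.norm(A @ x - sigma * M @ x) < 1e-12:
        break

print(sigma)                          # an eigenvalue of the pencil (A, M)
```

Replacing the direct `solve` with an inexact preconditioned iterative solve of the shifted system is precisely the setting the paper's convergence analysis addresses.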