
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 2009"


Journal ArticleDOI
TL;DR: It is shown that problem structure in the semidefinite programming formulation can be exploited to develop more efficient implementations of interior-point methods, and a variant of a simple subspace algorithm is presented in which low-rank matrix approximations are computed via nuclear norm minimization instead of the singular value decomposition.
Abstract: The nuclear norm (sum of singular values) of a matrix is often used in convex heuristics for rank minimization problems in control, signal processing, and statistics. Such heuristics can be viewed as extensions of $\ell_1$-norm minimization techniques for cardinality minimization and sparse signal estimation. In this paper we consider the problem of minimizing the nuclear norm of an affine matrix-valued function. This problem can be formulated as a semidefinite program, but the reformulation requires large auxiliary matrix variables, and is expensive to solve by general-purpose interior-point solvers. We show that problem structure in the semidefinite programming formulation can be exploited to develop more efficient implementations of interior-point methods. In the fast implementation, the cost per iteration is reduced to a quartic function of the problem dimensions and is comparable to the cost of solving the approximation problem in the Frobenius norm. In the second part of the paper, the nuclear norm approximation algorithm is applied to system identification. A variant of a simple subspace algorithm is presented in which low-rank matrix approximations are computed via nuclear norm minimization instead of the singular value decomposition. This has the important advantage of preserving linear matrix structure in the low-rank approximation. The method is shown to perform well on publicly available benchmark data.
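The problem being solved can be reproduced for small instances with a generic modeling tool. The sketch below, assuming CVXPY and random data in place of a real instance, minimizes the nuclear norm of an affine matrix-valued function with a general-purpose solver; it illustrates the optimization problem, not the paper's specialized interior-point implementation.

```python
# Hedged sketch: minimize ||A0 + x1*A1 + x2*A2||_* with a generic solver.
# A0, A1, A2 are random stand-ins for a real problem instance.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 8, 6
A0, A1, A2 = (rng.standard_normal((m, n)) for _ in range(3))

x = cp.Variable(2)
A = A0 + x[0] * A1 + x[1] * A2                 # affine matrix-valued function
prob = cp.Problem(cp.Minimize(cp.norm(A, "nuc")))
prob.solve()                                   # generic SDP solver, not the paper's method
print("optimal nuclear norm:", prob.value)
```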

534 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe an efficient algorithm for the low-rank approximation of matrices that achieves accuracy very close to the best possible, for matrices of arbitrary sizes.
Abstract: Principal component analysis (PCA) requires the computation of a low-rank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rank-deficient approximation is at most a few digits (measured in the spectral norm, relative to the spectral norm of the matrix being approximated). In such circumstances, efficient algorithms have not come with guarantees of good accuracy, unless one or both dimensions of the matrix being approximated are small. We describe an efficient algorithm for the low-rank approximation of matrices that produces accuracy that is very close to the best possible accuracy, for matrices of arbitrary sizes. We illustrate our theoretical results via several numerical examples.
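As a rough illustration of the kind of algorithm at issue, here is a randomized range-finder sketch with power iterations (a generic technique under assumed parameters, not necessarily the exact algorithm of this paper):

```python
import numpy as np

def randomized_lowrank(A, k, oversample=10, power=2, seed=0):
    """Rank-k approximation A ~ U diag(s) Vt via a random sketch."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((A.shape[1], k + oversample))
    Y = A @ G                                # sample the range of A
    for _ in range(power):                   # power iterations sharpen accuracy
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for the range
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U[:, :k], s[:k], Vt[:k]

# quick check on an exactly rank-10 matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 200))
U, s, Vt = randomized_lowrank(A, 10)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))   # ~ machine precision
```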

389 citations


Journal ArticleDOI
TL;DR: The numerical experiments show that the new algorithm generally provides accuracy at least as good as the existing algorithm of Higham at no higher cost, while for matrices that are triangular or cause overscaling it usually yields significant improvements in accuracy, cost, or both.
Abstract: The scaling and squaring method for the matrix exponential is based on the approximation $e^A \approx (r_m(2^{-s}A))^{2^s}$, where $r_m(x)$ is the $[m/m]$ Pade approximant to $e^x$ and the integers $m$ and $s$ are to be chosen. Several authors have identified a weakness of existing scaling and squaring algorithms termed overscaling, in which a value of $s$ much larger than necessary is chosen, causing a loss of accuracy in floating point arithmetic. Building on the scaling and squaring algorithm of Higham [SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179-1193], which is used by the MATLAB function expm, we derive a new algorithm that alleviates the overscaling problem. Two key ideas are employed. The first, specific to triangular matrices, is to compute the diagonal elements in the squaring phase as exponentials instead of from powers of $r_m$. The second idea is to base the backward error analysis that underlies the algorithm on members of the sequence $\{\|A^k\|^{1/k}\}$ instead of $\|A\|$, since for nonnormal matrices it is possible that $\|A^k\|^{1/k}$ is much smaller than $\|A\|$, and indeed this is likely when overscaling occurs in existing algorithms. The terms $\|A^k\|^{1/k}$ are estimated without computing powers of $A$ by using a matrix 1-norm estimator in conjunction with a bound of the form $\|A^k\|^{1/k} \le \max\bigl( \|A^p\|^{1/p}, \|A^q\|^{1/q} \bigr)$ that holds for certain fixed $p$ and $q$ less than $k$. The improvements to the truncation error bounds have to be balanced by the potential for a large $\|A\|$ to cause inaccurate evaluation of $r_m$ in floating point arithmetic. We employ rigorous error bounds along with some heuristics to ensure that rounding errors are kept under control. Our numerical experiments show that the new algorithm generally provides accuracy at least as good as the existing algorithm of Higham at no higher cost, while for matrices that are triangular or cause overscaling it usually yields significant improvements in accuracy, cost, or both.
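A bare-bones scaling and squaring routine fixes ideas. This sketch uses the classical $[m/m]$ Padé coefficients and a single scaling threshold theta (the value below is an assumption, roughly appropriate for $m=13$); it omits the paper's refinements, namely the $\|A^k\|^{1/k}$-based scaling and the special treatment of triangular matrices.

```python
import numpy as np
from math import factorial, ceil, log2

def expm_pade(A, m=13, theta=5.4):
    """Basic scaling and squaring: e^A ~ (r_m(2^{-s} A))^(2^s)."""
    A = np.asarray(A, dtype=float)
    normA = np.linalg.norm(A, 1)
    s = ceil(log2(normA / theta)) if normA > theta else 0
    A = A / 2**s                                  # scaling step
    n = A.shape[0]
    # coefficients of the [m/m] Pade approximant p_m(x)/p_m(-x) to e^x
    c = [factorial(2*m - j) * factorial(m) /
         (factorial(2*m) * factorial(j) * factorial(m - j)) for j in range(m + 1)]
    P = np.zeros((n, n)); Q = np.zeros((n, n)); X = np.eye(n)
    for j in range(m + 1):
        P += c[j] * X
        Q += c[j] * (-1)**j * X
        X = X @ A
    R = np.linalg.solve(Q, P)                     # r_m(2^{-s} A)
    for _ in range(s):                            # squaring phase
        R = R @ R
    return R
```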

335 citations


Journal ArticleDOI
TL;DR: This method extends a method of Collatz (1942) for calculating the spectral radius of an irreducible nonnegative matrix, and it is also applied to the study of higher-order Markov chains.
Abstract: In this paper we propose an iterative method for calculating the largest eigenvalue of an irreducible nonnegative tensor. This method is an extension of a method of Collatz (1942) for calculating the spectral radius of an irreducible nonnegative matrix. Numerical results show that our proposed method is promising. We also apply the method to studying higher-order Markov chains.
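For an order-3 tensor the iteration is short to state. The sketch below follows the Collatz-type scheme the abstract describes (the stopping rule and normalization are illustrative choices): with $(\mathcal{A}x^2)_i=\sum_{j,k}a_{ijk}x_jx_k$, the eigenvalue is bracketed by the minimum and maximum of the ratios $(\mathcal{A}x^2)_i/x_i^2$.

```python
import numpy as np

def largest_eig_nonneg_tensor(T, tol=1e-10, maxit=1000):
    """Collatz-type power iteration for an irreducible nonnegative
    order-3 tensor T; returns (eigenvalue estimate, eigenvector)."""
    n = T.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(maxit):
        y = np.einsum('ijk,j,k->i', T, x, x)      # (T x^2)_i
        ratios = y / x**2                          # Collatz bounds on lambda
        lo, hi = ratios.min(), ratios.max()
        if hi - lo < tol * hi:
            break
        x = np.sqrt(y)                             # next iterate, m - 1 = 2
        x /= np.linalg.norm(x)
    return 0.5 * (lo + hi), x

T = np.random.default_rng(0).random((4, 4, 4))     # positive => irreducible
print(largest_eig_nonneg_tensor(T)[0])
```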

300 citations


Journal ArticleDOI
TL;DR: A fast direct solver for large discretized linear systems is developed using the supernodal multifrontal method together with low-rank approximations; it is especially suitable for large sparse problems, adapts naturally to parallel computation, and has great potential to provide effective preconditioners.
Abstract: In this paper we develop a fast direct solver for large discretized linear systems using the supernodal multifrontal method together with low-rank approximations. For linear systems arising from certain partial differential equations such as elliptic equations, during the Gaussian elimination of the matrices with proper ordering, the fill-in has a low-rank property: all off-diagonal blocks have small numerical ranks with proper definition of off-diagonal blocks. Matrices with this low-rank property can be efficiently approximated with semiseparable structures called hierarchically semiseparable (HSS) representations. We reveal the above low-rank property by ordering the variables with nested dissection and eliminating them with the multifrontal method. All matrix operations in the multifrontal method are performed in HSS forms. We present efficient ways to organize the HSS structured operations along the elimination. Some fast HSS matrix operations using tree structures are proposed. This new structured multifrontal method has nearly linear complexity and a linear storage requirement. Thus, we call it a superfast multifrontal method. It is especially suitable for large sparse problems and also has natural adaptability to parallel computations and great potential to provide effective preconditioners. Numerical results demonstrate the efficiency.

225 citations


Journal ArticleDOI
TL;DR: In this paper, a Newton-Grassmann algorithm for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given tensor is presented; the problem is formulated as an approximation problem on a product of Grassmann manifolds.
Abstract: We derive a Newton method for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given $J\times K\times L$ tensor $\mathcal{A}$. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton-Grassmann algorithm.
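The Newton-Grassmann method itself is too involved to sketch briefly; for orientation, here is the standard alternating baseline (HOOI, a swapped-in technique, not the paper's algorithm) for the same best rank-$(r_1,r_2,r_3)$ problem, which the Newton method is designed to beat in convergence rate.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, iters=50):
    """Higher-order orthogonal iteration for a rank-(r1,r2,r3) approximation."""
    # initialize with the truncated HOSVD
    U = [np.linalg.svd(unfold(T, m))[0][:, :r] for m, r in enumerate(ranks)]
    for _ in range(iters):
        for m in range(3):
            # contract T with the other two factors, then refresh mode-m basis
            if m == 0:
                Y = np.einsum('ijk,ja,kb->iab', T, U[1], U[2])
            elif m == 1:
                Y = np.einsum('ijk,ia,kb->jab', T, U[0], U[2])
            else:
                Y = np.einsum('ijk,ia,jb->kab', T, U[0], U[1])
            U[m] = np.linalg.svd(unfold(Y, 0))[0][:, :ranks[m]]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return core, U
```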

156 citations


Journal ArticleDOI
TL;DR: Numerical results show that, for a suitably chosen $(1,1)$ block-matrix, this constraint preconditioner outperforms the block-diagonal and the block-tridiagonal ones in iteration steps and computing time when they are used to accelerate the GMRES method for solving these block two-by-two symmetric positive indefinite linear systems.
Abstract: We study the eigenvalue bounds of block two-by-two nonsingular and symmetric indefinite matrices whose $(1,1)$ block is symmetric positive definite and Schur complement with respect to its $(2,2)$ block is symmetric indefinite. A constraint preconditioner for this matrix is constructed by simply replacing the $(1,1)$ block by a symmetric and positive definite approximation, and the spectral properties of the preconditioned matrix are discussed. Numerical results show that, for a suitably chosen $(1,1)$ block-matrix, this constraint preconditioner outperforms the block-diagonal and the block-tridiagonal ones in iteration step and computing time when they are used to accelerate the GMRES method for solving these block two-by-two symmetric positive indefinite linear systems. The new results extend the existing ones about block two-by-two matrices of symmetric negative semidefinite $(2,2)$ blocks to those of general symmetric $(2,2)$ blocks.
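A minimal sketch of the construction, with an assumed random test problem and the diagonal of the $(1,1)$ block as its symmetric positive definite approximation (the paper's choices are more refined):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 200, 50
A = sp.random(n, n, 0.02, random_state=0)
A = A @ A.T + 2 * sp.eye(n)                    # SPD (1,1) block
B = sp.random(m, n, 0.05, random_state=1)
C = sp.eye(m)                                  # symmetric (2,2) block
K = sp.bmat([[A, B.T], [B, -C]]).tocsc()       # block two-by-two system
b = rng.standard_normal(n + m)

Ahat = sp.diags(A.diagonal())                  # SPD approximation of A
P = sp.bmat([[Ahat, B.T], [B, -C]]).tocsc()    # constraint preconditioner
Plu = spla.splu(P)
M = spla.LinearOperator(K.shape, Plu.solve)

x, info = spla.gmres(K, b, M=M)
print("GMRES converged:", info == 0)
```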

150 citations


Journal ArticleDOI
TL;DR: A new metric and mean on the set of positive semidefinite matrices of fixed rank is introduced, derived from a well-chosen Riemannian quotient geometry that generalizes the reductive geometry of the positive cone and the associated natural metric.
Abstract: This paper introduces a new metric and mean on the set of positive semidefinite matrices of fixed rank. The proposed metric is derived from a well-chosen Riemannian quotient geometry that generalizes the reductive geometry of the positive cone and the associated natural metric. The resulting Riemannian space has strong geometrical properties: it is geodesically complete, and the metric is invariant with respect to all transformations that preserve angles (orthogonal transformations, scalings, and pseudoinversion). A meaningful approximation of the associated Riemannian distance is proposed that can be computed efficiently via a simple algorithm based on the SVD. The induced mean preserves the rank, possesses the most desirable characteristics of a geometric mean, and is easy to compute.

140 citations


Journal ArticleDOI
Michele Benzi
TL;DR: It is shown that the new scheme can outperform the standard HSS method in some situations and can be used as an effective preconditioner for certain linear systems in saddle point form.
Abstract: This paper is concerned with a generalization of the Hermitian and skew-Hermitian (HSS) splitting iteration for solving positive definite, non-Hermitian linear systems. It is shown that the new scheme can outperform the standard HSS method in some situations and can be used as an effective preconditioner for certain linear systems in saddle point form. Numerical experiments using discretizations of incompressible flow problems demonstrate the effectiveness of the generalized HSS preconditioner.
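For reference, a sketch of the standard HSS iteration that the paper generalizes (dense solves for clarity; the generalized splitting and its use as a preconditioner are not reproduced):

```python
import numpy as np

def hss_iteration(A, b, alpha=1.0, tol=1e-10, maxit=500):
    """Alternate solves with the shifted Hermitian and skew-Hermitian parts."""
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)                 # Hermitian part
    S = 0.5 * (A - A.conj().T)                 # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros(n, dtype=A.dtype)
    for k in range(maxit):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

rng = np.random.default_rng(0)
n = 50
A = 4 * np.eye(n) + 0.2 * rng.standard_normal((n, n))   # positive definite, non-Hermitian
b = rng.standard_normal(n)
x, its = hss_iteration(A, b)
print(its, np.linalg.norm(b - A @ x))
```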

105 citations


Journal ArticleDOI
TL;DR: It is shown that the convergence of the doubling algorithm is at least linear with rate $1/2$.
Abstract: In this paper, we review two types of doubling algorithm and some techniques for analyzing them. We then use the techniques to study the doubling algorithm for three different nonlinear matrix equations in the critical case. We show that the convergence of the doubling algorithm is at least linear with rate $1/2$. As compared to earlier work on this topic, the results we present here are more general, and the analysis here is much simpler.

96 citations


Journal ArticleDOI
TL;DR: Eigenvalue bounds for perturbations of Hermitian matrices are presented and the change in eigenvalues are expressed in terms of a projection of the perturbation onto a particular eigenspace, rather than in terms.
Abstract: We present eigenvalue bounds for perturbations of Hermitian matrices and express the change in eigenvalues in terms of a projection of the perturbation onto a particular eigenspace, rather than in terms of the full perturbation. The perturbations we consider are Hermitian of rank one, and Hermitian or non-Hermitian with norm smaller than the spectral gap of a specific eigenvalue. Applications include principal component analysis under a spiked covariance model, and pseudo-arclength continuation methods for the solution of nonlinear systems.

Journal ArticleDOI
TL;DR: Two simple upper bounds for the joint spectral radius of sets of nonnegative matrices can be computed in polynomial time as solutions of convex optimization problems as well as for sets of matrices with independent column uncertainties or with independent row uncertainties.
Abstract: We propose two simple upper bounds for the joint spectral radius of sets of nonnegative matrices. These bounds, the joint column radius and the joint row radius, can be computed in polynomial time as solutions of convex optimization problems. We show that these bounds are within a factor $1/n$ of the exact value, where $n$ is the size of the matrices. Moreover, for sets of matrices with independent column uncertainties or with independent row uncertainties, the corresponding bounds coincide with the joint spectral radius. In these cases, the joint spectral radius is also given by the largest spectral radius of the matrices in the set. As a by-product of these results, we propose a polynomial-time technique for solving Boolean optimization problems related to the spectral radius. We also describe economics and engineering applications of our results.

Journal ArticleDOI
TL;DR: An inexact variant is proposed that allows one to control the number of inner iterations used by an iterative solver at each Newton step; conditions are given under which the monotonicity and global convergence results of Kleinman also hold for the inexact Newton iterates.
Abstract: In this paper we consider the numerical solution of the algebraic Riccati equation using Newton's method. We propose an inexact variant which allows one to control the number of inner iterations used by an iterative solver for each Newton step. Conditions are given under which the monotonicity and global convergence result of Kleinman also hold for the inexact Newton iterates. Numerical results illustrate the efficiency of this method.
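The exact Kleinman-Newton iteration is easy to sketch: each step solves a Lyapunov equation. In the paper's inexact variant the solve below would be replaced by a truncated iterative inner solver. A stabilizing initial guess is assumed ($X_0=0$ works when $A$ is stable).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_newton(A, B, Q, X0, tol=1e-12, maxit=50):
    """Newton's method for A'X + XA - XBB'X + Q = 0 (exact inner solves)."""
    X = X0                                        # must be stabilizing
    for _ in range(maxit):
        Ak = A - B @ (B.T @ X)                    # closed-loop matrix
        rhs = -(Q + X @ B @ B.T @ X)
        Xn = solve_continuous_lyapunov(Ak.T, rhs) # Ak' Xn + Xn Ak = rhs
        if np.linalg.norm(Xn - X) <= tol * np.linalg.norm(Xn):
            return Xn
        X = Xn
    return X

rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))      # stable => X0 = 0 is fine
B = rng.standard_normal((n, 1)); Q = np.eye(n)
X = kleinman_newton(A, B, Q, np.zeros((n, n)))
print(np.linalg.norm(A.T @ X + X @ A - X @ B @ B.T @ X + Q))   # Riccati residual
```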

Journal ArticleDOI
TL;DR: It is shown that the new algorithm calculates the trace in $\mathcal{O}(m)$ flops per iteration, where $m$ is the dimension of the matrices in the Lyapunov equation (the coefficient matrices are treated as dense).
Abstract: This paper deals with an efficient algorithm for optimizing dampers' viscosities in mechanical systems. Our algorithm optimizes the trace of the solution of the corresponding Lyapunov equation using an iterative method which calculates a low-rank Cholesky factor of that solution. We have shown that the new algorithm calculates the trace in $\mathcal{O}(m)$ flops per iteration, where $m$ is the dimension of the matrices in the Lyapunov equation (our coefficient matrices are treated as dense).

Journal ArticleDOI
TL;DR: This paper reformulates the least squares semidefinite programming problem with a large number of equality and inequality constraints as a system of semismooth equations with two level metric projection operators and designs an inexact smoothing Newton method to solve the resulting semismooth system.
Abstract: In this paper, we consider the least squares semidefinite programming with a large number of equality and inequality constraints. One difficulty in finding an efficient method for solving this problem is due to the presence of the inequality constraints. In this paper, we propose to overcome this difficulty by reformulating the problem as a system of semismooth equations with two level metric projection operators. We then design an inexact smoothing Newton method to solve the resulting semismooth system. At each iteration, we use the BiCGStab iterative solver to obtain an approximate solution to the generated smoothing Newton linear system. Our numerical experiments confirm the high efficiency of the proposed method.
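The basic operator in such reformulations is the metric projection onto the positive semidefinite cone, sketched below via an eigendecomposition (the smoothing Newton machinery and the BiCGStab inner solver are not reproduced here).

```python
import numpy as np

def project_psd(X):
    """Metric projection of a symmetric matrix onto the PSD cone."""
    Xs = 0.5 * (X + X.T)                     # symmetrize first
    w, V = np.linalg.eigh(Xs)
    return (V * np.maximum(w, 0.0)) @ V.T    # clip negative eigenvalues

X = np.array([[1.0, 2.0], [2.0, -3.0]])
print(project_psd(X))
```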

Journal ArticleDOI
TL;DR: First, a complete account of the reducible max-algebraic spectral theory is given, and then it is applied to characterize robust matrices.
Abstract: Let $a\oplus b=\max(a,b)$ and $a\otimes b=a+b$ for $a,b\in\overline{\mathbb{R}}:=\mathbb{R}\cup\{-\infty\}$. By max-algebra we understand the analogue of linear algebra developed for the pair of operations $(\oplus,\otimes)$, extended to matrices and vectors. The symbol $A^{k}$ stands for the $k$th max-algebraic power of a square matrix $A$. Let us denote by $\varepsilon$ the max-algebraic “zero” vector, all the components of which are $-\infty$. The max-algebraic eigenvalue-eigenvector problem is the following: Given $A\in\overline{\mathbb{R}}^{n\times n}$, find all $\lambda\in\overline{\mathbb{R}}$ and $x\in\overline{\mathbb{R}}^{n}$, $x\neq\varepsilon$, such that $A\otimes x=\lambda\otimes x$. Certain problems of scheduling lead to the following question: Given $A\in\overline{\mathbb{R}}^{n\times n}$, is there a $k$ such that $A^{k}\otimes x$ is a max-algebraic eigenvector of $A$? If the answer is affirmative for every $x\neq\varepsilon$, then $A$ is called robust. First, we give a complete account of the reducible max-algebraic spectral theory, and then we apply it to characterize robust matrices.
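A small numerical sketch of these definitions (the matrix is an arbitrary illustrative choice): the max-plus product $A\otimes x$, and a robustness-style check that the powers $A^{k}\otimes x$ reach an eigenvector.

```python
import numpy as np

def maxplus_matvec(A, x):
    """(A (x) x)_i = max_j (a_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

A = np.array([[3.0, 1.0], [2.0, 0.0]])
x = np.array([0.0, 0.0])
for k in range(10):
    y = maxplus_matvec(A, x)
    diffs = y - x
    # x is a max-plus eigenvector iff y = lambda (x) x, i.e. all diffs equal
    if np.ptp(diffs) < 1e-12:
        print(f"eigenvector reached at k={k}, lambda={diffs[0]}")
        break
    x = y
```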

Journal ArticleDOI
TL;DR: This paper deals with an efficient technique for computing high-quality approximations of Schur complement matrices to be used in various preconditioners for the iterative solution of finite element discretizations of elliptic boundary value problems.
Abstract: This paper deals with an efficient technique for computing high-quality approximations of Schur complement matrices to be used in various preconditioners for the iterative solution of finite element discretizations of elliptic boundary value problems. The Schur complements are based on a two-by-two block decomposition of the matrix, and their approximations are computed by assembly of local (macroelement) Schur complements. The block partitioning is done by imposing a particular node ordering following the grid refinement hierarchy in the discretization mesh. For the theoretical derivation of condition number bounds, but not for the actual application of the method, we assume that the corresponding differential operator is self-adjoint and positive definite. The numerical efficiency of the proposed Schur complement approximation is illustrated in the framework of block incomplete factorization preconditioners of multilevel type, which require approximations of a sequence of arising Schur complement matrices. The behavior of the proposed approximation is compared with that of the coarse mesh finite element matrix, commonly used as an approximation of the Schur complement in the context of the above preconditioning methods. Moreover, the influence of refining a coarse mesh using a higher refinement number ($m$) than the customary $m=2$ is analyzed and its efficiency is also illustrated by numerical tests.
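For orientation, the object being approximated is the Schur complement of a two-by-two block decomposition, formed exactly in this sketch (the paper instead assembles approximations from local macroelement contributions).

```python
import numpy as np

def schur_complement(K, n1):
    """Schur complement of the leading n1-by-n1 block of K."""
    A = K[:n1, :n1]; B = K[:n1, n1:]
    C = K[n1:, :n1]; D = K[n1:, n1:]
    return D - C @ np.linalg.solve(A, B)

K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(schur_complement(K, 2))
```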

Journal ArticleDOI
TL;DR: Several theoretical aspects of updating that preserves both no spill-over and positive definiteness of the mass and the stiffness matrices are highlighted, and some necessary and sufficient solvability conditions are established in this investigation.
Abstract: Updating a system modeled as a real symmetric quadratic eigenvalue problem to match observed spectral information has been an important task for practitioners in different disciplines. It is often desirable in the process to match only the newly measured data without tampering with the other unmeasured and often unknown eigenstructure inherent in the original model. Such an updating, known as no spill-over, has been critical yet challenging in practice. Only recently has a mathematical theory on updating with no spill-over begun to be understood. However, other imperative issues such as maintaining positive definiteness in the coefficient matrices remain to be addressed. This paper highlights several theoretical aspects of updating that preserves both no spill-over and positive definiteness of the mass and the stiffness matrices. In particular, some necessary and sufficient solvability conditions are established in this investigation.

Journal ArticleDOI
TL;DR: Under the sole condition that $B$ is a stable perturbation of $A$, two explicit formulas for the Drazin inverse $B^D$ and the spectral projector $B^\pi$ are provided, and some upper bounds for $\Vert B^D-A^D\Vert/\Vert A^D\Vert$ and $\Vert B^\pi-A^\pi\Vert$ are derived from these formulas under certain conditions.
Abstract: For any $n\times n$ complex matrix $A$, let $A^D$ and $A^\pi$ be the Drazin inverse and the spectral projector of $A$, respectively, where $A^\pi=I-AA^D$. When $A$ is singular, an $n\times n$ complex matrix $B$ is said to be a stable perturbation of $A$ if $I-A^\pi-B^\pi$ is nonsingular or, equivalently, if the matrix $B$ satisfies condition (${\cal C}_s$) recently introduced by Castro-Gonzalez, Robles, and Velez-Cerrada [SIAM J. Matrix Anal. Appl., 30 (2008), pp. 882-897]. In the perturbation analysis of the Drazin inverse, the condition of $\Vert B-A\Vert$ being small is usually implicitly assumed in the literature. In this case, the condition of $B$ being a stable perturbation of $A$ is necessary in order to ensure the continuity of the Drazin inverse. In this paper, only under the condition that $B$ is a stable perturbation of $A$, two explicit formulas for the Drazin inverse $B^D$ and the spectral projector $B^\pi$ are provided, respectively, and some upper bounds for $\Vert B^D-A^D\Vert/\Vert A^D\Vert$ and $\Vert B^\pi-A^\pi\Vert$ are derived from these formulas under certain conditions. In the case where $\Vert B-A\Vert$ is small, numerical examples are given indicating the sharpness of these norm upper bounds. Furthermore, a numerical example is provided to illustrate that a perturbation analysis of the Drazin inverse can also be conducted, even if $\Vert B-A\Vert$ is not small.
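Experimenting with such bounds requires computing $A^D$ itself; here is a sketch using the classical formula $A^D=A^p(A^{2p+1})^{\dagger}A^p$ for any $p\geq\mathrm{ind}(A)$ (a generic textbook formula, not taken from the paper):

```python
import numpy as np

def drazin_inverse(A, tol=1e-12):
    """Drazin inverse via A^D = A^p (A^(2p+1))^+ A^p, p >= index(A)."""
    n = A.shape[0]
    # index of A: smallest p with rank(A^p) == rank(A^(p+1))
    p, Ap = 0, np.eye(n)
    while np.linalg.matrix_rank(Ap, tol) != np.linalg.matrix_rank(Ap @ A, tol):
        Ap = Ap @ A
        p += 1
    Apow = np.linalg.matrix_power(A, p)
    return Apow @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * p + 1)) @ Apow

A = np.array([[2.0, 0.0], [0.0, 0.0]])   # singular, index 1
print(drazin_inverse(A))                  # expect diag(0.5, 0)
```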

Journal ArticleDOI
TL;DR: Computational aspects are discussed and order-optimal error bounds are provided that characterize the accuracy of the regularized solutions, extending earlier results where the operator is exactly given.
Abstract: For solving linear ill-posed problems, regularization methods are required when the right-hand side and/or the operator are corrupted by some noise. In the present paper, regularized solutions are constructed using regularized total least squares (RTLS) and dual regularized total least squares (DRTLS). We discuss computational aspects and provide order optimal error bounds that characterize the accuracy of the regularized solutions. The results extend earlier results where the operator is exactly given. We also present some numerical experiments, which shed light on the relationship between RTLS, DRTLS, and the standard Tikhonov regularization.

Journal ArticleDOI
TL;DR: It is shown that an appropriate small rank change to the preconditioner can produce significant savings in costs and, in particular, can produce a situation where there is no increase in the costs of the iterative solves even though the solve tolerances are reducing.
Abstract: Convergence results are provided for inexact inverse subspace iteration applied to the problem of finding the invariant subspace associated with a small number of eigenvalues of a large sparse matrix. These results are illustrated by the use of block-GMRES as the iterative solver. The costs of the inexact solves are measured by the number of inner iterations needed by the iterative solver at each outer step of the algorithm. It is shown that for a decreasing tolerance the number of inner iterations should not increase as the outer iteration proceeds, but it may increase for preconditioned iterative solves. However, it is also shown that an appropriate small rank change to the preconditioner can produce significant savings in costs and, in particular, can produce a situation where there is no increase in the costs of the iterative solves even though the solve tolerances are reducing. Numerical examples are provided to illustrate the theory.
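A sketch of the cost measure in the simplest possible setting (block size one, a diagonal test matrix, matrix-vector products counted as a proxy for inner iterations; no preconditioner tuning):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

A = sp.diags(np.arange(1.0, 101.0)).tocsr()        # eigenvalues 1, 2, ..., 100
x = np.ones(100); x /= np.linalg.norm(x)

count = {"mv": 0}
def matvec(v):
    count["mv"] += 1
    return A @ v
Aop = spla.LinearOperator(A.shape, matvec)

for outer in range(6):
    before = count["mv"]
    y, info = spla.gmres(Aop, x)                   # inexact inner solve A y = x
    x = y / np.linalg.norm(y)
    lam = x @ (A @ x)                              # Rayleigh quotient
    print(f"outer {outer}: Ritz value {lam:.6f}, inner matvecs {count['mv'] - before}")
```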

Journal ArticleDOI
TL;DR: Eigenvalue intervals for symmetric saddle point and regularized saddle point matrices in the case where the (1,1) block may be indefinite are provided and the spectral properties of the equivalent augmented formulation are studied.
Abstract: We provide eigenvalue intervals for symmetric saddle point and regularized saddle point matrices in the case where the (1,1) block may be indefinite. These generalize known results for the definite (1,1) case. We also study the spectral properties of the equivalent augmented formulation, which is an alternative to explicitly dealing with the indefinite (1,1) block. Such an analysis may be used to assess the convergence of suitable Krylov subspace methods. We conclude with spectral analyses of the effects of common block-diagonal preconditioners.

Journal ArticleDOI
TL;DR: This paper provides a construction for $X$ and $Y$ to achieve the desired inertia/rank that uses only unitary/orthogonal transformations, thus leading to a numerically reliable construction.
Abstract: In this paper we consider the admissible inertias and ranks of the expressions $A-BXB^*-CYC^*$ and $A-BXC^*\pm CX^*B^*$ with unknowns $X$ and $Y$ in the four cases when these expressions are: (i) complex self-adjoint, (ii) complex skew-adjoint, (iii) real symmetric, (iv) real skew symmetric. We also provide a construction for $X$ and $Y$ to achieve the desired inertia/rank that uses only unitary/orthogonal transformations, thus leading to a numerically reliable construction. In addition, we look at the related block matrix completion problems $\left[\begin{array}{ccc} {\cal A} & {\cal B} & {\cal C} \\ \pm {\cal B}^\star & {\cal X} & {\cal E} \\ \pm {\cal C}^\star & \pm {\cal E}^\star & {\cal Y} \end{array}\right]$ with two unknown diagonal blocks and $\left[\begin{array}{ccc} {\cal A} & {\cal B} & {\cal X} \\ \pm {\cal B}^\star & {\cal D} & {\cal C} \\ \pm {\cal X}^\star & \pm {\cal C}^\star & {\cal E} \end{array}\right]$ with an unknown off-diagonal block. Finally, we also provide all admissible ranks in the case when we drop any adjointness/symmetry constraint.

Journal ArticleDOI
TL;DR: For the important special case of quadratics, it is shown how a definite quadratic polynomial can be transformed into a definite linearization with a positive definite leading coefficient matrix—a form that is particularly attractive numerically.
Abstract: Hyperbolic matrix polynomials are an important class of Hermitian matrix polynomials that contain overdamped quadratics as a special case. They share with definite pencils the spectral property that their eigenvalues are real and semisimple. We extend the definition of hyperbolic matrix polynomial in a way that relaxes the requirement of definiteness of the leading coefficient matrix, yielding what we call definite polynomials. We show that this class of polynomials has an elegant characterization in terms of definiteness intervals on the extended real line and that it includes definite pencils as a special case. A fundamental question is whether a definite matrix polynomial $P$ can be linearized in a structure-preserving way. We show that the answer to this question is affirmative: $P$ is definite if and only if it has a definite linearization in $\mathbb{H}(P)$, a certain vector space of Hermitian pencils; and for definite $P$ we give a complete characterization of all the linearizations in $\mathbb{H}(P)$ that are definite. For the important special case of quadratics, we show how a definite quadratic polynomial can be transformed into a definite linearization with a positive definite leading coefficient matrix—a form that is particularly attractive numerically.
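To make the linearization idea concrete, here is a sketch of one well-known Hermitian pencil for a quadratic $Q(\lambda)=\lambda^2M+\lambda D+K$ (whether it is a definite linearization in $\mathbb{H}(P)$ depends on the polynomial; the paper's transformation to a positive definite leading coefficient is not reproduced).

```python
import numpy as np

def hermitian_linearization(M, D, K):
    """L(lam) = lam*[[M, 0], [0, -K]] + [[D, K], [K, 0]],
    a symmetric pencil with the same eigenvalues as Q."""
    n = M.shape[0]; Z = np.zeros((n, n))
    A = np.block([[M, Z], [Z, -K]])
    B = np.block([[D, K], [K, Z]])
    return A, B

# small overdamped (hyperbolic) quadratic: all eigenvalues real
M = np.eye(2); D = 10 * np.eye(2)
K = np.array([[2.0, 1.0], [1.0, 2.0]])
A, B = hermitian_linearization(M, D, K)
eigs = np.linalg.eigvals(np.linalg.solve(A, -B))   # eigenvalues of lam*A + B
print(np.sort(eigs.real))
```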

Journal ArticleDOI
TL;DR: In this paper, a sharp majorization-type Rayleigh-Ritz error bound is established in terms of the principal angles between an $A$-invariant subspace $\mathcal{X}$ and a trial subspace $\mathcal{Y}$, resolving a conjecture of [SIAM J. Matrix Anal. Appl., 30 (2008), pp. 548-559].
Abstract: The Rayleigh-Ritz (RR) method finds the stationary values, called Ritz values, of the Rayleigh quotient on a given trial subspace as approximations to eigenvalues of a Hermitian operator $A$. If the trial subspace is $A$-invariant, the Ritz values are exactly some of the eigenvalues of $A$. Given two subspaces $\mathcal{X}$ and $\mathcal{Y}$ of the same finite dimension, such that $\mathcal{X}$ is $A$-invariant, the absolute changes in the Ritz values of $A$ with respect to $\mathcal{X}$ compared to the Ritz values with respect to $\mathcal{Y}$ represent the RR absolute eigenvalue approximation error. Our first main result is a sharp majorization-type RR error bound in terms of the principal angles between $\mathcal{X}$ and $\mathcal{Y}$ for an arbitrary $A$-invariant $\mathcal{X}$, which was a conjecture in [SIAM J. Matrix Anal. Appl., 30 (2008), pp. 548-559]. Second, we prove a novel type of RR error bound that deals with the products of the errors, rather than the sums. Third, we establish majorization bounds for the relative errors. We extend our bounds to the case $\dim\mathcal{X}\leq\dim\mathcal{Y}<\infty$ in Hilbert spaces and apply them in the context of the finite element method.
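A quick illustration of the quantities in the bounds, assuming SciPy's subspace_angles and a random Hermitian matrix: Ritz values on an invariant subspace versus a perturbed trial subspace, alongside the principal angles between the two.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n, k = 20, 3
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)         # Hermitian

w, V = np.linalg.eigh(A)
X = V[:, :k]                                                  # A-invariant subspace
Y, _ = np.linalg.qr(X + 0.05 * rng.standard_normal((n, k)))   # perturbed trial subspace

def ritz_values(A, Q):
    return np.linalg.eigvalsh(Q.T @ A @ Q)                    # Rayleigh-Ritz on range(Q)

print("Ritz value errors:", np.abs(ritz_values(A, X) - ritz_values(A, Y)))
print("principal angles:", subspace_angles(X, Y))
```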

Journal ArticleDOI
TL;DR: This paper focuses on the computation of the joint spectral radius $\rho({\cal F})$ via the detection of an extremal norm in the class of complex polytope norms, whose unit balls are balanced complex polytopes with a finite essential system of vertices.
Abstract: In this paper we consider finite families ${\cal F}$ of real $n\!\times\!n$ matrices. In particular, we focus on the computation of the joint spectral radius $\rho({\cal F})$ via the detection of an extremal norm in the class of complex polytope norms, whose unit balls are balanced complex polytopes with a finite essential system of vertices. Such a finiteness property is very useful in view of the construction of efficient computational algorithms. More precisely, we improve the results obtained in our previous paper [N. Guglielmi, F. Wirth, and M. Zennaro, SIAM J. Matrix Anal. Appl., 27 (2005), pp. 721-743], where we gave some conditions on the family ${\cal F}$ which are sufficient to guarantee the existence of an extremal complex polytope norm. Unfortunately, they exclude unnecessarily many interesting cases of real families. Therefore, here we relax the conditions given in our previous paper in order to provide a more satisfactory treatment of the real case.

Journal ArticleDOI
TL;DR: It is shown that small rank changes to the preconditioner can produce significant savings in the total number of iterations, and the combination of the new preconditioner with the relaxation strategy in implicitly restarted Arnoldi improves the overall costs of the method.
Abstract: We consider the computation of a few eigenvectors and corresponding eigenvalues of a large sparse nonsymmetric matrix using shift-invert Arnoldi's method with and without implicit restarts. For the inner iterations we use preconditioned GMRES as the inexact iterative solver. The costs of the solves are measured by the number of inner iterations needed by the iterative solver at each outer step of the algorithm. We first extend the relaxation strategy developed by Simoncini [SIAM J. Numer. Anal., 43 (2005), pp. 1155-1174] to implicitly restarted Arnoldi's method, which yields an improvement in the overall costs of the method. Secondly, we apply a new preconditioning strategy to the inner solver. We show that small rank changes to the preconditioner can produce significant savings in the total number of iterations. The combination of the new preconditioner with the relaxation strategy in implicitly restarted Arnoldi produces enhancement in the overall costs of around 50 percent in the examples considered here. Numerical experiments illustrate the theory throughout the paper.
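For context, exact shift-invert Arnoldi is available through SciPy's ARPACK wrapper; the sketch below computes a few eigenvalues of a nonsymmetric matrix near a shift (the paper's actual subject, inexact inner solves with tuned preconditioners, is not exposed by this interface).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 1000
A = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -0.5 * np.ones(n - 1)],
             [-1, 0, 1])                      # nonsymmetric tridiagonal
vals, vecs = eigs(A.tocsc(), k=4, sigma=0.0)  # shift-invert about sigma = 0
print(np.sort(vals.real))
```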

Journal ArticleDOI
TL;DR: A new rank-revealing algorithm for low rank matrices with efficient and straightforward updating/downdating capabilities is presented and the numerical results show that the new algorithm appears to be efficient and robust.
Abstract: As one of the basic problems in matrix computation, rank-revealing arises in a wide variety of applications in scientific computing. Although the singular value decomposition is the standard rank-revealing method, it is costly in both computing time and storage when the rank or the nullity is low, and it is inefficient in updating and downdating when rows and columns are inserted or deleted. Consequently, alternative methods are in demand in those situations. Following up on a recent rank-revealing algorithm by Li and Zeng for the low nullity case, we present a new rank-revealing algorithm for low rank matrices with efficient and straightforward updating/downdating capabilities. The method has been implemented in Matlab, and the numerical results show that the new algorithm appears to be efficient and robust.
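As a classical point of comparison (not the paper's new algorithm), column-pivoted QR is a cheap rank-revealing alternative to the SVD:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 100))   # rank 8

Q, R, piv = qr(A, pivoting=True)
d = np.abs(np.diag(R))                       # decreasing by construction
rank = int(np.sum(d > d[0] * 1e-10))
print("estimated rank:", rank)               # expect 8
```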

Journal ArticleDOI
TL;DR: In this article, a graph-theoretic test is presented for determining when the sign pattern of the Jacobian of a chemical reaction dynamics is unambiguous; the main topic, however, is counting the number of positive and negative coefficients in the determinant expansion of sign patterns and of these Jacobians.
Abstract: This paper treats two topics: Matrices with sign patterns and Jacobians of certain mappings on the nonnegative orthant $\mathbb{R}_{\geq0}^d$. The main topic is counting the number of positive and negative coefficients in the determinant expansion of sign patterns and of these Jacobians. The paper is motivated by an approach to chemical networks initiated by Craciun and Feinberg [SIAM J. Appl. Math., 65 (2005), pp. 1526-1546; SIAM J. Appl. Math., 66 (2006), pp. 1321-1338]. We also give a graph-theoretic test for determining when the sign pattern of the Jacobian of a chemical reaction dynamics is unambiguous.
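The counting problem is simple to state in code. This brute-force sketch enumerates the terms of the determinant expansion of a sign pattern and counts the positive and negative ones; it is feasible only for tiny $n$, whereas the paper pursues structural results.

```python
import numpy as np
from itertools import permutations

def det_term_signs(S):
    """Count positive and negative terms in the det expansion of sign pattern S."""
    n = S.shape[0]
    pos = neg = 0
    for perm in permutations(range(n)):
        term = np.prod([S[i, perm[i]] for i in range(n)])
        if term == 0:
            continue
        # sign of the permutation via its inversion count
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        if term * (-1) ** inv > 0:
            pos += 1
        else:
            neg += 1
    return pos, neg

S = np.array([[1, -1, 0], [1, 1, -1], [0, 1, 1]])   # entries in {-1, 0, +1}
print(det_term_signs(S))
```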

Journal ArticleDOI
TL;DR: The algorithm is revisited and it is shown that proper handling of three details of the algorithm is crucial to its efficiency and reliability: how the midpoint of an arc is computed, whether shrinkage of an arc is permitted, and how directions of negative curvature are computed.
Abstract: A 25-year-old and somewhat neglected algorithm of Crawford and Moon attempts to determine whether a given Hermitian matrix pair $(A,B)$ is definite by exploring the range of the function $f(x) = x^*(A+iB)x / | x^*(A+iB)x |$, which is a subset of the unit circle. We revisit the algorithm and show that with suitable modifications and careful attention to implementation details it provides a reliable and efficient means of testing definiteness. A clearer derivation of the basic algorithm is given that emphasizes an arc expansion viewpoint and makes no assumptions about the definiteness of the pair. Convergence of the algorithm is proved for all $(A,B)$, definite or not. It is shown that proper handling of three details of the algorithm is crucial to the efficiency and reliability: how the midpoint of an arc is computed, whether shrinkage of an arc is permitted, and how directions of negative curvature are computed. For the latter, several variants of Cholesky factorization with complete pivoting are explored and the benefits of pivoting demonstrated. The overall cost of our improved algorithm is typically just a few Cholesky factorizations. Three applications of the algorithm are described: testing the hyperbolicity of a Hermitian quadratic matrix polynomial, constructing conjugate gradient methods for sparse linear systems in saddle point form, and computing the Crawford number of the pair $(A,B)$ via a quasiconvex univariate minimization problem.
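A naive baseline makes the task concrete: the pair $(A,B)$ is definite exactly when some rotation $\cos\theta\,A+\sin\theta\,B$ is positive definite, which the sketch below tests by sampling $\theta$ and attempting Cholesky factorizations. The Crawford-Moon arc algorithm reaches the same decision far more cleverly, typically with just a few factorizations.

```python
import numpy as np

def is_definite_pair(A, B, samples=360):
    """Sample theta and test whether cos(t)A + sin(t)B is positive definite."""
    for t in np.linspace(0.0, 2 * np.pi, samples, endpoint=False):
        M = np.cos(t) * A + np.sin(t) * B
        try:
            np.linalg.cholesky(M)        # succeeds iff M is positive definite
            return True, t
        except np.linalg.LinAlgError:
            pass
    return False, None

A = np.diag([1.0, -1.0]); B = np.eye(2)
print(is_definite_pair(A, B))            # definite pair: B itself is already PD
```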