Showing papers in "SIAM Journal on Matrix Analysis and Applications in 2007"


Journal ArticleDOI
TL;DR: This work defines the Log‐Euclidean mean from a Riemannian point of view, based on a Lie group structure which is compatible with the usual algebraic properties of this matrix space and a new scalar multiplication that smoothly extends the Lie group structure into a vector space structure.
Abstract: In this work we present a new generalization of the geometric mean of positive numbers on symmetric positive‐definite matrices, called Log‐Euclidean. The approach is based on two novel algebraic structures on symmetric positive‐definite matrices: first, a Lie group structure which is compatible with the usual algebraic properties of this matrix space; second, a new scalar multiplication that smoothly extends the Lie group structure into a vector space structure. From bi‐invariant metrics on the Lie group structure, we define the Log‐Euclidean mean from a Riemannian point of view. This notion coincides with the usual Euclidean mean associated with the novel vector space structure. Furthermore, this mean corresponds to an arithmetic mean in the domain of matrix logarithms. We detail the invariance properties of this novel geometric mean and compare it to the recently introduced affine‐invariant mean. The two means have the same determinant and are equal in a number of cases, yet they are not identical in general.

791 citations
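The defining computation is simple to state: take matrix logarithms, average them, and exponentiate. A minimal sketch in Python (illustrative code using scipy.linalg, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(spd_matrices):
    """Log-Euclidean mean: the matrix exponential of the arithmetic
    mean of the matrix logarithms (equivalently, the Euclidean mean
    in the vector space structure described in the abstract)."""
    logs = [logm(S) for S in spd_matrices]
    return expm(sum(logs) / len(logs))

# Demo on random symmetric positive-definite matrices.
rng = np.random.default_rng(0)
mats = [(lambda X: X @ X.T + 3 * np.eye(3))(rng.standard_normal((3, 3)))
        for _ in range(4)]
M = log_euclidean_mean(mats)
print(np.linalg.eigvalsh(M))   # all positive: the mean is again SPD
```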


Journal ArticleDOI
TL;DR: It is shown that the increment-based computational approach gives locally quasi-optimal low-rank approximations that are well suited for numerical integration.
Abstract: For the low-rank approximation of time-dependent data matrices and of solutions to matrix differential equations, an increment-based computational approach is proposed and analyzed. In this method, the derivative is projected onto the tangent space of the manifold of rank-$r$ matrices at the current approximation. With an appropriate decomposition of rank-$r$ matrices and their tangent matrices, this yields nonlinear differential equations that are well suited for numerical integration. The error analysis compares the result with the pointwise best approximation in the Frobenius norm. It is shown that the approach gives locally quasi-optimal low-rank approximations. Numerical experiments illustrate the theoretical results.

262 citations
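The central step of the method is the orthogonal projection of the derivative onto the tangent space of the rank-$r$ manifold at the current approximation $Y = USV^T$. A minimal sketch of that projection under the standard formula $P(D) = UU^TD + DVV^T - UU^TDVV^T$ (my notation, not the paper's code):

```python
import numpy as np

def tangent_project(U, V, D):
    """Project D onto the tangent space of the rank-r manifold at
    Y = U S V^T, where U (n x r) and V (m x r) have orthonormal columns:
    P(D) = U U^T D + D V V^T - U U^T D V V^T."""
    UtD = U.T @ D
    DV = D @ V
    return U @ UtD + DV @ V.T - U @ (UtD @ V) @ V.T

rng = np.random.default_rng(1)
n, m, r = 8, 6, 2
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((m, r)))
D = rng.standard_normal((n, m))
P = tangent_project(U, V, D)
# An orthogonal projector is idempotent: P(P(D)) == P(D).
print(np.allclose(tangent_project(U, V, P), P))
```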


Journal ArticleDOI
TL;DR: This paper discusses a new class of matrix nearness problems that measure approximation error using a directed distance measure called a Bregman divergence, proposes a framework for studying these problems, discusses some specific matrix nearness problems, and provides algorithms for solving them numerically.
Abstract: This paper discusses a new class of matrix nearness problems that measure approximation error using a directed distance measure called a Bregman divergence. Bregman divergences offer an important generalization of the squared Frobenius norm and relative entropy, and they all share fundamental geometric properties. In addition, these divergences are intimately connected with exponential families of probability distributions. Therefore, it is natural to study matrix approximation problems with respect to Bregman divergences. This article proposes a framework for studying these problems, discusses some specific matrix nearness problems, and provides algorithms for solving them numerically. These algorithms apply to many classical and novel problems, and they admit a striking geometric interpretation.

193 citations
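As a concrete instance, the two divergences named in the abstract can be written down directly. An illustrative sketch of the squared Frobenius norm and the von Neumann (entropy-generated) divergence for SPD arguments, using standard formulas rather than the paper's notation:

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_divergence(X, Y):
    """Bregman divergence of phi(X) = tr(X log X - X):
    D(X, Y) = tr(X log X - X log Y - X + Y), for SPD X, Y."""
    return np.trace(X @ logm(X) - X @ logm(Y) - X + Y).real

def squared_frobenius(X, Y):
    """Bregman divergence of phi(X) = ||X||_F^2 / 2."""
    return 0.5 * np.linalg.norm(X - Y, "fro") ** 2

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)); X = A @ A.T + np.eye(4)
B = rng.standard_normal((4, 4)); Y = B @ B.T + np.eye(4)
print(von_neumann_divergence(X, Y), von_neumann_divergence(X, X))  # second ~0
print(squared_frobenius(X, Y))
```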


Journal ArticleDOI
TL;DR: This work considers large scale sparse linear systems in saddle point form and presents a different approach that is not based on an explicit augmentation technique, which leads to convergence rates of the preconditioned conjugate gradient method that are not only independent of the mesh size but also independent of the regularization parameter.
Abstract: We consider large scale sparse linear systems in saddle point form. A natural property of such indefinite 2-by-2 block systems is the positivity of the (1,1) block on the kernel of the (2,1) block. Many solution methods, however, require that the positivity of the (1,1) block is satisfied everywhere. To enforce the positivity everywhere, an augmented Lagrangian approach is usually chosen. However, the adjustment of the involved parameters is a critical issue. We will present a different approach that is not based on such an explicit augmentation technique. For the considered class of symmetric and indefinite preconditioners, assumptions are presented that lead to symmetric and positive definite problems with respect to a particular scalar product. Therefore, conjugate gradient acceleration can be used. An important class of applications are optimal control problems. It is typical for such problems that the cost functional contains an extra regularization parameter. For control problems with elliptic state equations and distributed control, a special indefinite preconditioner for the discretized problem is constructed, which leads to convergence rates of the preconditioned conjugate gradient method that are not only independent of the mesh size but also independent of the regularization parameter. Numerical experiments are presented for illustrating the theoretical results.

177 citations


Journal ArticleDOI
TL;DR: The quest for a highly accurate and efficient SVD algorithm has led to a new, superior variant of the Jacobi algorithm, which has inherited all good high accuracy properties and can outperform the QR algorithm.
Abstract: This paper is the result of concerted efforts to break the barrier between numerical accuracy and run-time efficiency in computing the fundamental decomposition of numerical linear algebra—the singular value decomposition (SVD) of general dense matrices. It is an unfortunate fact that the numerically most accurate one-sided Jacobi SVD algorithm is several times slower than generally less accurate bidiagonalization-based methods such as the QR or the divide-and-conquer algorithm. Our quest for a highly accurate and efficient SVD algorithm has led us to a new, superior variant of the Jacobi algorithm. The new algorithm has inherited all good high accuracy properties of the Jacobi algorithm, and it can outperform the QR algorithm.

154 citations
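For orientation, here is a textbook one-sided Jacobi SVD of the kind the paper starts from (the new preconditioned variant is substantially more involved and is not reproduced here): column pairs are rotated until mutually orthogonal, and the singular values are the final column norms.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Textbook one-sided Jacobi SVD for full-column-rank A (m >= n):
    rotate column pairs of U (initially A) until all pairs are orthogonal;
    then A = U diag(s) V^T with s the column norms of U."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue
                converged = False
                # Rotation angle that zeroes entry (p, q) of U^T U.
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 if zeta == 0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot
                V[:, [p, q]] = V[:, [p, q]] @ rot
        if converged:
            break
    s = np.linalg.norm(U, axis=0)
    return U / s, s, V.T

A = np.random.default_rng(1).standard_normal((6, 4))
U, s, Vt = one_sided_jacobi_svd(A)
print(np.allclose((U * s) @ Vt, A), np.allclose(U.T @ U, np.eye(4)))
```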


Journal ArticleDOI
TL;DR: Backward error bounds depending only on the norms of the $A_i$ are derived for the companion pencils and for the vector space of pencils recently identified, and the results are shown to be entirely consistent with those of Higham, Mackey, and Tisseur on the conditioning of linearizations of $P$.
Abstract: The most widely used approach for solving the polynomial eigenvalue problem $P(\lambda)x = (\sum_{i=0}^m \lambda^i A_i) x = 0$ in $n\times n$ matrices $A_i$ is to linearize to produce a larger order pencil $L(\lambda) = \lambda X + Y$, whose eigensystem is then found by any method for generalized eigenproblems. For a given polynomial $P$, infinitely many linearizations $L$ exist and approximate eigenpairs of $P$ computed via linearization can have widely varying backward errors. We show that if a certain one-sided factorization relating $L$ to $P$ can be found then a simple formula permits recovery of right eigenvectors of $P$ from those of $L$, and the backward error of an approximate eigenpair of $P$ can be bounded in terms of the backward error for the corresponding approximate eigenpair of $L$. A similar factorization has the same implications for left eigenvectors. We use this technique to derive backward error bounds depending only on the norms of the $A_i$ for the companion pencils and for the vector space $\mathbb{DL}(P)$ of pencils recently identified by Mackey, Mackey, Mehl, and Mehrmann. In all cases, sufficient conditions are identified for an optimal backward error for $P$. These results are shown to be entirely consistent with those of Higham, Mackey, and Tisseur on the conditioning of linearizations of $P$. Other contributions of this work are a block scaling of the companion pencils that yields improved backward error bounds; a demonstration that the bounds are applicable to certain structured linearizations of structured polynomials; and backward error bounds specialized to the quadratic case, including analysis of the benefits of a scaling recently proposed by Fan, Lin, and Van Dooren. The results herein make no assumptions on the stability of the method applied to $L$ or whether the method is direct or iterative.

118 citations
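To make the setup concrete for the quadratic case $P(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0$, here is a sketch using one common first companion pencil (conventions for companion pencils vary; this is illustrative, not the paper's code). Right eigenvectors of $P$ are read off from the trailing block of those of $L$, matching the recovery idea described in the abstract:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# One first companion linearization L(lam) = lam*X + Y of
# P(lam) = lam^2 A2 + lam A1 + A0; eigenvectors have the form z = [lam*x; x].
Z0 = np.zeros((n, n))
X = np.block([[A2, Z0], [Z0, np.eye(n)]])
Y = np.block([[A1, A0], [-np.eye(n), Z0]])

# L(lam) z = 0  <=>  (-Y) z = lam X z: a generalized eigenproblem.
lams, Z = eig(-Y, X)

# Recover a right eigenvector of P from the trailing n rows of z.
k = int(np.argmin(np.abs(lams)))        # pick any finite eigenvalue
lam, x = lams[k], Z[n:, k]
residual = (lam**2 * A2 + lam * A1 + A0) @ x
print(np.linalg.norm(residual))         # small: (lam, x) is an eigenpair of P
```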


Journal ArticleDOI
TL;DR: A new superfast solver for Toeplitz systems of linear equations is developed; the low numerical rank of the off-diagonal blocks of the associated Cauchy-like matrices enables quick approximation by structured matrices called sequentially semiseparable (SSS) matrices, and the solver is stable in practice.
Abstract: In this paper we develop a new superfast solver for Toeplitz systems of linear equations. To solve Toeplitz systems many people use displacement equation methods. With displacement structures, Toeplitz matrices can be transformed into Cauchy-like matrices using the FFT or other trigonometric transformations. These Cauchy-like matrices have a special property, that is, their off-diagonal blocks have small numerical ranks. This low-rank property plays a central role in our superfast Toeplitz solver. It enables us to quickly approximate the Cauchy-like matrices by structured matrices called sequentially semiseparable (SSS) matrices. The major work of the constructions of these SSS forms can be done in precomputations (independent of the Toeplitz matrix entries). These SSS representations are compact because of the low-rank property. The SSS Cauchy-like systems can be solved in linear time with linear storage. Excluding precomputations the main operations are the FFT and SSS system solve, which are both very efficient. Our new Toeplitz solver is stable in practice. Numerical examples are presented to illustrate the efficiency and the practical stability.

108 citations


Journal ArticleDOI
TL;DR: The contribution of this paper is to show that the high relative accuracy is preserved by operations that preserve the total nonnegativity—taking a product, re-signed inverse, converse, Schur complement, or submatrix of a totally nonnegative matrix, any of which costs at most $\mathcal{O}(\max(m^3,n^3))$.
Abstract: We consider the problem of performing accurate computations with rectangular $(m\times n)$ totally nonnegative matrices. The matrices under consideration have the property of having a unique representation as products of nonnegative bidiagonal matrices. Given that representation, one can compute the inverse, LDU decomposition, eigenvalues, and SVD of a totally nonnegative matrix to high relative accuracy in $\mathcal{O}(\max(m^3,n^3))$ time—much more accurately than conventional algorithms that ignore that structure. The contribution of this paper is to show that the high relative accuracy is preserved by operations that preserve the total nonnegativity—taking a product, re-signed inverse (when $m=n$), converse, Schur complement, or submatrix of a totally nonnegative matrix, any of which costs at most $\mathcal{O}(\max(m^3,n^3))$. In other words, the class of totally nonnegative matrices for which we can do numerical linear algebra very accurately in $\mathcal{O}(\max(m^3,n^3))$ time (namely, those for which we have a product representation via nonnegative bidiagonals) is closed under the operations listed above.

105 citations
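The representation underlying these algorithms is easy to demonstrate: any product of nonnegative bidiagonal matrices is totally nonnegative. A small illustrative sketch that assembles such a product and spot-checks minors numerically (the paper's algorithms work directly on the bidiagonal parameters rather than on the assembled matrix):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 4

def lower_bidiagonal():
    """A nonnegative lower bidiagonal factor."""
    return np.diag(rng.uniform(0.5, 2.0, n)) + np.diag(rng.uniform(0.0, 1.0, n - 1), -1)

# Assemble a totally nonnegative matrix as a product of bidiagonal factors.
lowers = [lower_bidiagonal() for _ in range(3)]
uppers = [lower_bidiagonal().T for _ in range(3)]
A = np.linalg.multi_dot(lowers + uppers)

# Spot-check total nonnegativity: every 2x2 minor is nonnegative.
ok = all(np.linalg.det(A[np.ix_(r, c)]) >= -1e-10
         for r in combinations(range(n), 2)
         for c in combinations(range(n), 2))
print(ok)
```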


Journal ArticleDOI
TL;DR: The Latouche-Ramaswami algorithm, combined with a shift technique suggested by He, Meini, and Rhee, is breakdown-free and is able to find the minimal solution more efficiently and more accurately than the algorithm without a shift.
Abstract: We study the nonsymmetric algebraic Riccati equation whose four coefficient matrices are the blocks of a nonsingular $M$-matrix or an irreducible singular $M$-matrix $M$. The solution of practical interest is the minimal nonnegative solution. We show that Newton’s method with zero initial guess can be used to find this solution without any further assumptions. We also present a qualitative perturbation analysis for the minimal solution, which is instructive in designing algorithms for finding more accurate approximations. For the most practically important case, in which $M$ is an irreducible singular $M$-matrix with zero row sums, the minimal solution is either stochastic or substochastic and the Riccati equation can be transformed into a unilateral matrix equation by a procedure of Ramaswami. The minimal solution of the Riccati equation can then be found by computing the minimal nonnegative solution of the unilateral equation using the Latouche-Ramaswami algorithm. When the minimal solution of the Riccati equation is stochastic, we show that the Latouche-Ramaswami algorithm, combined with a shift technique suggested by He, Meini, and Rhee, is breakdown-free and is able to find the minimal solution more efficiently and more accurately than the algorithm without a shift. When the minimal solution of the Riccati equation is substochastic, we show how the substochastic minimal solution can be found by computing the stochastic minimal solution of a related Riccati equation of the same type.

94 citations
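A hedged sketch of the Newton iteration with zero initial guess, written for the common form $R(X) = XCX - XD - AX + B = 0$ where $A, B, C, D$ are the blocks of $M$ (sign conventions differ across papers; the test matrices below are synthetic, chosen so that $M$ is a nonsingular $M$-matrix, and this is not the paper's code). Each Newton step solves a Sylvester equation:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_nare(A, B, C, D, tol=1e-12, max_iter=50):
    """Newton's method with zero initial guess for
    R(X) = X C X - X D - A X + B = 0. Each step solves the Sylvester
    equation (A - X C) H + H (D - C X) = R(X), then updates X <- X + H."""
    X = np.zeros_like(B)
    for _ in range(max_iter):
        R = X @ C @ X - X @ D - A @ X + B
        if np.linalg.norm(R) < tol:
            break
        H = solve_sylvester(A - X @ C, D - C @ X, R)
        X = X + H
    return X

n = 3
E = np.ones((n, n))
A = D = 3 * np.eye(n)     # blocks of a nonsingular M-matrix [[D, -C], [-B, A]]
B = C = 0.5 * E
X = newton_nare(A, B, C, D)
print(np.linalg.norm(X @ C @ X - X @ D - A @ X + B))   # ~0
```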


Journal ArticleDOI
TL;DR: In this article, an explicit solution is given to the rank-constrained matrix approximation in Frobenius norm, which is a generalization of the classical approximation of an $m\times n$ matrix by a matrix of rank at most $k$.
Abstract: In this paper we give an explicit solution to the rank-constrained matrix approximation in Frobenius norm, which is a generalization of the classical approximation of an $m\times n$ matrix $A$ by a matrix of rank at most $k$.

88 citations
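For contrast, the classical unconstrained case that this paper generalizes is solved by the Eckart-Young theorem: truncate the SVD. A minimal reminder sketch:

```python
import numpy as np

def best_rank_k(A, k):
    """Eckart-Young: the truncated SVD is the closest matrix of rank <= k
    in the Frobenius (and spectral) norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

A = np.random.default_rng(6).standard_normal((8, 5))
A2 = best_rank_k(A, 2)
print(np.linalg.matrix_rank(A2), np.linalg.norm(A - A2, "fro"))
```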


Journal ArticleDOI
TL;DR: A simple lumping algorithm computes the PageRank of the nondangling nodes separately from that of the dangling nodes, extends to Google matrices with several dangling node vectors when it is required to distinguish among different classes of dangling nodes, and a Jordan decomposition of the Google matrix is given for the (theoretical) extreme case when all Web pages are dangling nodes.
Abstract: We present a simple algorithm for computing the PageRank (stationary distribution) of the stochastic Google matrix $G$. The algorithm lumps all dangling nodes into a single node. We express lumping as a similarity transformation of $G$ and show that the PageRank of the nondangling nodes can be computed separately from that of the dangling nodes. The algorithm applies the power method only to the smaller lumped matrix, but the convergence rate is the same as that of the power method applied to the full matrix $G$. The efficiency of the algorithm increases as the number of dangling nodes increases. We also extend the expression for PageRank and the algorithm to more general Google matrices that have several different dangling node vectors, when it is required to distinguish among different classes of dangling nodes. We also analyze the effect of the dangling node vector on the PageRank and show that the PageRank of the dangling nodes depends strongly on that of the nondangling nodes but not vice versa. Last we present a Jordan decomposition of the Google matrix for the (theoretical) extreme case when all Web pages are dangling nodes.
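As background for the lumping algorithm, here is a plain power-method sketch for the full Google matrix, with the dangling rows handled implicitly through a rank-one term (the paper's lumped iteration runs the same kind of recursion on a smaller matrix; this sketch is illustrative and uses a uniform dangling/teleportation vector):

```python
import numpy as np

def pagerank_power(H, alpha=0.85, tol=1e-10):
    """Power method for the Google matrix built from a hyperlink matrix H
    (row-stochastic except for zero rows, which are dangling nodes)."""
    n = H.shape[0]
    v = np.full(n, 1.0 / n)          # teleportation / dangling-node vector
    dangling = H.sum(axis=1) == 0
    pi = np.full(n, 1.0 / n)
    while True:
        new = alpha * (pi @ H) + alpha * pi[dangling].sum() * v + (1 - alpha) * v
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new

# Tiny web graph: node 2 is a dangling node.
H = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
pi = pagerank_power(H)
print(pi, pi.sum())   # stationary distribution, sums to 1
```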

Journal ArticleDOI
TL;DR: Nonsymmetric algebraic Riccati equations for which the four coefficient matrices form an irreducible singular $M$-matrix, which arises in the study of Markov models, are considered.
Abstract: Nonsymmetric algebraic Riccati equations for which the four coefficient matrices form an irreducible $M$-matrix $M$ are considered. The emphasis is on the case where $M$ is an irreducible singular $M$-matrix, which arises in the study of Markov models. The doubling algorithm is considered for finding the minimal nonnegative solution, the one of practical interest. The algorithm has been recently studied by others for the case where $M$ is a nonsingular $M$-matrix. A shift technique is proposed to transform the original Riccati equation into a new Riccati equation for which the four coefficient matrices form a nonsingular matrix. The convergence of the doubling algorithm is accelerated when it is applied to the shifted Riccati equation.

Journal ArticleDOI
TL;DR: This work extends the idea of using constraint preconditioners that have a specific 2 by 2 block structure for the case of $C$ being zero by allowing the (2, 2) block to be symmetric and positive semidefinite.
Abstract: The problem of finding good preconditioners for the numerical solution of an important class of indefinite linear systems is considered. These systems are of a regularized saddle point structure $[\begin{smallmatrix} A &\quad B^T \\ B &\quad -C \end{smallmatrix}] [\begin{smallmatrix} x \\ y \end{smallmatrix}] = [\begin{smallmatrix} c \\ d \end{smallmatrix}], $ where $A\in\mathbb R ^{n\times n}$, $C\in\mathbb R ^{m\times m}$ are symmetric and $B\in\mathbb R ^{m\times n}$. In [SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1300-1317], Keller, Gould, and Wathen analyze the idea of using constraint preconditioners that have a specific 2 by 2 block structure for the case of $C$ being zero. We shall extend this idea by allowing the (2, 2) block to be symmetric and positive semidefinite. Results concerning the spectrum and form of the eigenvectors are presented, as are numerical results to validate our conclusions.

Journal ArticleDOI
TL;DR: It is shown that the CG method with variable preconditioning under this assumption may not give improvement, compared to the steepest descent (SD) method, and a new elegant geometric proof of the SD convergence rate bound is given.
Abstract: We analyze the conjugate gradient (CG) method with variable preconditioning for solving a linear system with a real symmetric positive definite (SPD) matrix of coefficients $A$. We assume that the preconditioner is SPD on each step, and that the condition number of the preconditioned system matrix is bounded above by a constant independent of the step number. We show that the CG method with variable preconditioning under this assumption may not give improvement, compared to the steepest descent (SD) method. We describe the basic theory of CG methods with variable preconditioning with the emphasis on “worst case” scenarios, and provide complete proofs of all facts not available in the literature. We give a new elegant geometric proof of the SD convergence rate bound. Our numerical experiments, comparing the preconditioned SD and CG methods, not only support and illustrate our theoretical findings, but also reveal two surprising and potentially practically important effects. First, we analyze variable preconditioning in the form of inner-outer iterations. In previous such tests, the unpreconditioned CG inner iterations are applied to an artificial system with some fixed preconditioner as a matrix of coefficients. We test a different scenario, where the unpreconditioned CG inner iterations solve linear systems with the original system matrix $A$. We demonstrate that the CG-SD inner-outer iterations perform as well as the CG-CG inner-outer iterations in these tests. Second, we compare the CG methods using a two-grid preconditioning with fixed and randomly chosen coarse grids, and observe that the fixed preconditioner method is twice as slow as the method with random preconditioning.

Journal ArticleDOI
TL;DR: The reverse order rule $(AB)^\dag=B^\dag A^\dag$ for the Moore-Penrose inverse is established in several equivalent forms.
Abstract: The reverse order rule $(AB)^\dag=B^\dag A^\dag$ for the Moore-Penrose inverse is established in several equivalent forms. Results related to other generalized inverses are also proved.
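A quick numerical illustration of the rule (not one of the paper's equivalent characterizations): a well-known sufficient condition is that $A$ have full column rank and $B$ full row rank, and the rule can fail without such conditions.

```python
import numpy as np

pinv = np.linalg.pinv
rng = np.random.default_rng(7)

A = rng.standard_normal((5, 3))   # full column rank (generically)
B = rng.standard_normal((3, 4))   # full row rank (generically)
print(np.allclose(pinv(A @ B), pinv(B) @ pinv(A)))   # True here

# Without such conditions the reverse order rule can fail:
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
print(np.allclose(pinv(A @ B), pinv(B) @ pinv(A)))   # False
```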

Journal ArticleDOI
TL;DR: The paper presents some novel fast adaptations of the shifted QR algorithm for computing the eigenvalues of a matrix $A\in \mathcal H_n$ where each step can be performed with $O(n)$ flops and memory space.
Abstract: Let $\mathcal H_n\subset \mathbb C^{n\times n}$ be the class of $n\times n$ Hessenberg matrices $A$ which are rank-one modifications of a unitary matrix, that is, $A=H+\mathbf{u}\mathbf{w}^H$, where $H$ is unitary and $\mathbf{u}, \mathbf{w}\in \mathbb C^n$. The class $\mathcal H_n$ includes three well-known subclasses: unitary Hessenberg matrices, companion (Frobenius) matrices, and fellow matrices. The paper presents some novel fast adaptations of the shifted QR algorithm for computing the eigenvalues of a matrix $A\in \mathcal H_n$ where each step can be performed with $O(n)$ flops and $O(n)$ memory space. Numerical experiments confirm the effectiveness and the robustness of these methods.

Journal ArticleDOI
TL;DR: A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction equation of the Davidson approach is replaced by a polynomial filtering step, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts.
Abstract: A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.
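The filtering step itself is a short three-term recurrence. A sketch in the style of common Chebyshev filter implementations (illustrative; the paper's algorithm embeds this inside a Davidson-type subspace iteration): eigencomponents with eigenvalues in the damped interval $[a, b]$ are suppressed, those below $a$ are amplified.

```python
import numpy as np

def chebyshev_filter(A, v, degree, a, b):
    """Compute p_m(A) v, where p_m is the degree-m Chebyshev polynomial
    mapped so it is bounded by 1 on [a, b] and grows rapidly below a."""
    e = (b - a) / 2.0      # half-width of the damped interval
    c = (b + a) / 2.0      # its center
    y_prev, y = v, (A @ v - c * v) / e
    for _ in range(2, degree + 1):
        y_prev, y = y, 2.0 * (A @ y - c * y) / e - y_prev
    return y

# Demo: a symmetric matrix with known eigenvectors (columns of Q).
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = (Q * np.linspace(0.0, 10.0, 50)) @ Q.T

v = rng.standard_normal(50)
y = chebyshev_filter(A, v, degree=50, a=0.15, b=10.0)
y /= np.linalg.norm(y)
print(abs(Q[:, 0] @ y))   # near 1: the filtered vector aligns with the lowest eigenvector
```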

Journal ArticleDOI
TL;DR: This work proposes a relaxation of the Chebyshev center, which is the vector that minimizes the worst-case estimation error over all feasible vectors, and demonstrates via numerical examples that this estimator can outperform other conventional methods, such as least-squares and regularized least-squares, with respect to the estimation error.
Abstract: We consider the problem of estimating a vector ${\bf z}$ in the regression model ${\bf b} = {\bf A} {\bf z} + {\bf w}$, where ${\bf w}$ is an unknown but bounded noise. As in many regularization schemes, we assume that an upper bound on the norm of ${\bf z}$ is available. To estimate ${\bf z}$ we propose a relaxation of the Chebyshev center, which is the vector that minimizes the worst-case estimation error over all feasible vectors ${\bf z}$. Relying on recent results regarding strong duality of nonconvex quadratic optimization problems with two quadratic constraints, we prove that in the complex domain our approach leads to the exact Chebyshev center. In the real domain, this strategy results in a “pretty good” approximation of the true Chebyshev center. As we show, our estimate can be viewed as a Tikhonov regularization with a special choice of parameter that can be found efficiently by solving a convex optimization problem with two variables or a semidefinite program with three variables, regardless of the problem size. When the norm constraint on ${\bf z}$ is a Euclidean one, the problem reduces to a single-variable convex minimization problem. We then demonstrate via numerical examples that our estimator can outperform other conventional methods, such as least-squares and regularized least-squares, with respect to the estimation error. Finally, we extend our methodology to other feasible parameter sets, showing that the total least-squares (TLS) and regularized TLS can be obtained as special cases of our general approach.
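The practical upshot is that the estimator reduces to Tikhonov regularization with a specially chosen parameter found by a low-dimensional optimization. The sketch below shows only the generic machinery: a Tikhonov solve plus a one-dimensional search on the parameter, here chosen naively by bisection so that the Euclidean norm bound $\|{\bf z}\| \le L$ is met. This is an illustrative stand-in, not the paper's minimax-optimal parameter rule.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Regularized least squares: argmin ||A z - b||^2 + alpha ||z||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def norm_constrained_ls(A, b, L, tol=1e-10):
    """Pick alpha >= 0 by bisection so that ||z(alpha)|| <= L.
    (Illustrative parameter rule only; the paper derives a different,
    minimax-optimal choice via a small convex problem.)"""
    z = tikhonov(A, b, 0.0)
    if np.linalg.norm(z) <= L:
        return z
    lo, hi = 0.0, 1.0
    while np.linalg.norm(tikhonov(A, b, hi)) > L:
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(tikhonov(A, b, mid)) > L else (lo, mid)
    return tikhonov(A, b, hi)

rng = np.random.default_rng(9)
A = rng.standard_normal((20, 5))
z_true = rng.standard_normal(5)
b = A @ z_true + 0.1 * rng.standard_normal(20)
z = norm_constrained_ls(A, b, L=np.linalg.norm(z_true))
print(np.linalg.norm(z), np.linalg.norm(z_true))
```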

Journal ArticleDOI
TL;DR: This work focuses on the generic behavior of the KCF of the perturbed pencil $(P+Q)(\lambda)$, i.e., the behavior appearing for perturbations $Q(\lambda)$ in a dense open subset of the pencils with rank $\rho$.
Abstract: Let $P(\lambda) = A_0 + \lambda A_1$ be a singular $m\times n$ matrix pencil without full rank whose Kronecker canonical form (KCF) is given. Let $\rho$ be a positive integer such that $\rho \leq \min\{m,n\}- {\rm rank} (P)$ and $\rho \leq {\rm rank}(P)$. We study the change of the KCF of $P(\lambda)$ due to perturbation pencils $Q(\lambda)$ with ${\rm rank} (Q) = \rho$. We focus on the generic behavior of the KCF of $(P+Q)(\lambda)$, i.e., the behavior appearing for perturbations $Q(\lambda)$ in a dense open subset of the pencils with rank $\rho$. The most remarkable generic properties of the KCF of the perturbed pencil $(P+Q) (\lambda)$ are (i) if $\lambda_0$ is an eigenvalue of $P(\lambda)$, finite or infinite, then $\lambda_0$ is an eigenvalue of $(P+Q)(\lambda)$; (ii) if $\lambda_0$ is an eigenvalue of $P(\lambda)$, then the number of Jordan blocks associated with $\lambda_0$ in the KCF of $(P+Q) (\lambda)$ is equal to or greater than the number of Jordan blocks associated with $\lambda_0$ in the KCF of $P(\lambda)$; (iii) if $\lambda_0$ is an eigenvalue of $P(\lambda)$, then the dimensions of the Jordan blocks associated with $\lambda_0$ in $(P+Q) (\lambda)$ are equal to or greater than the dimensions of the Jordan blocks associated with $\lambda_0$ in $P (\lambda)$; (iv) the row (column) minimal indices of $(P+Q) (\lambda)$ are equal to or greater than the largest row (column) minimal indices of $P(\lambda)$. Moreover, if the sum of the row (column) minimal indices of the perturbations $Q(\lambda)$ is known, apart from their rank, then the whole set of the row (column) minimal indices of $(P+Q) (\lambda)$ is generically obtained, and in the case $\rho < \min\{m,n\}- {\rm rank} (P)$ the whole KCF of $(P+Q) (\lambda)$ is generically determined.

Journal ArticleDOI
TL;DR: Admissible sets of data concerning systems of eigenvalues and eigenvectors are examined, and procedures for generating associated (isospectral) families of systems are developed.
Abstract: Computational schemes are investigated for the solution of inverse spectral problems for $n \times n$ real systems of the form $L(\lambda) = M\lambda^2 + D\lambda + K$. Thus, admissible sets of data concerning systems of eigenvalues and eigenvectors are examined, and procedures for generating associated (isospectral) families of systems are developed. The analysis includes symmetric systems, systems with mixed real/nonreal spectrum, systems with positive definite coefficients, and hyperbolic systems (with real spectrum). A one‐to‐one correspondence between Jordan pairs and structure preserving similarities is clarified. An examination of complex symmetric matrices is included.

Journal ArticleDOI
TL;DR: A multigrid algorithm is developed that computes the solution of the Sylvester equation in a data-sparse format and is of complexity $\mathcal{O}((n+m)k^{2})$.
Abstract: We consider the Sylvester equation $AX-XB+C=0$, where the matrix $C\in\mathbb{R}^{n\times m}$ is of low rank and the spectra of $A\in\mathbb{R}^{n\times n} $ and $B\in\mathbb{R}^{m\times m}$ are separated by a line. The solution $X$ can be approximated in a data-sparse format, and we develop a multigrid algorithm that computes the solution in this format. For the multigrid method to work, we need a hierarchy of discretizations. Here the matrices $A$ and $B$ each stem from the discretization of a partial differential operator of elliptic type. The algorithm is of complexity $\mathcal{O}(n+m)$, or, more precisely, if the solution can be represented with $(n+m)k$ data ($k\sim \log(n+m)$), then the complexity of the algorithm is $\mathcal{O}((n+m)k^{2})$.
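For small dense instances, the equation can be handed to a direct solver by rewriting $AX - XB + C = 0$ as $AX + X(-B) = -C$; the multigrid method in the paper targets the large, data-sparse regime where such a baseline is infeasible. An illustrative sketch:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(10)
n, m = 6, 4
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # spectra separated by a line:
B = rng.standard_normal((m, m)) - 3 * np.eye(m)   # Re(eig A) > 0 > Re(eig B)
C = np.outer(rng.standard_normal(n), rng.standard_normal(m))   # low rank

# A X - X B + C = 0  <=>  A X + X (-B) = -C
X = solve_sylvester(A, -B, -C)
print(np.linalg.norm(A @ X - X @ B + C))   # ~0
```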

Journal ArticleDOI
TL;DR: A sharp estimate of the condition number of a linear function of the solution of a full column rank least squares problem is provided, along with a statistical method that estimates it from exact condition numbers in random orthogonal directions.
Abstract: We consider here the linear least squares problem $\min_{y \in \mathbb{R}^n}\|Ay-b\|_2$, where $b \in \mathbb{R}^m$ and $A \in \mathbb{R}^{m\times n}$ is a matrix of full column rank $n$, and we denote its solution by $x$. We assume that both $A$ and $b$ can be perturbed and that these perturbations are measured using the Frobenius or the spectral norm for $A$ and the Euclidean norm for $b$. In this paper, we are concerned with the condition number of a linear function of $x$ ($L^Tx$, where $L \in \mathbb{R}^{n\times k}$) for which we provide a sharp estimate that lies within a factor $\sqrt{3}$ of the true condition number. Provided the triangular $R$ factor of $A$ from $A^TA=R^TR$ is available, this estimate can be computed in $2kn^2$ flops. We also propose a statistical method that estimates the partial condition number by using the exact condition numbers in random orthogonal directions. If $R$ is available, this statistical approach enables us to obtain a condition estimate at a lower computational cost. In the case of the Frobenius norm, we derive a closed formula for the partial condition number that is based on the singular values and the right singular vectors of the matrix $A$.

Journal ArticleDOI
TL;DR: From the necessary and sufficient conditions for complete reachability and observability of periodic descriptor systems with time-varying dimensions, the symmetric positive semidefinite reachability/observability Gramians are defined and shown to satisfy some projected generalized discrete-time periodic Lyapunov equations.
Abstract: From the necessary and sufficient conditions for complete reachability and observability of periodic descriptor systems with time-varying dimensions, the symmetric positive semidefinite reachability/observability Gramians are defined. These Gramians can be shown to satisfy some projected generalized discrete-time periodic Lyapunov equations. We propose a numerical method for solving these projected Lyapunov equations, and give an illustrative numerical example. As an application of our results, the balanced realization of periodic descriptor systems is discussed.

Journal ArticleDOI
TL;DR: New pivoting strategies that combine numerical pivoting and perturbation techniques are developed; they are shown to be numerically robust, to require few steps of iterative refinement, and to yield a factorization that is significantly faster than with previous methods.
Abstract: We consider the direct solution of sparse symmetric indefinite matrices. We develop new pivoting strategies that combine numerical pivoting and perturbation techniques. Then an iterative refinement process uses our approximate factorization to compute a solution. We show that our pivoting strategies are numerically robust, that few steps of iterative refinement are required, and that the factorization is significantly faster than with previous methods. Furthermore, we propose original approaches that are designed for parallel distributed factorization. A key point of our parallel implementation is the cheap and reliable estimation of the growth factor. This estimation is based on an approximation of the off-diagonal entries and does not require any supplementary messages.

Journal ArticleDOI
TL;DR: The theory behind the steady state analysis of large sparse Markov chains with a recently proposed class of multilevel methods using concepts from algebraic multigrid and iterative aggregation-disaggregation is investigated.
Abstract: This paper investigates the theory behind the steady state analysis of large sparse Markov chains with a recently proposed class of multilevel methods using concepts from algebraic multigrid and iterative aggregation-disaggregation. The motivation is to better understand the convergence characteristics of the class of multilevel methods and to have a clearer formulation that will aid their implementation. In doing this, restriction (or aggregation) and prolongation (or disaggregation) operators of multigrid are used, and the Kronecker-based approach for hierarchical Markovian models is employed, since it suggests a natural and compact definition of grids (or levels). However, the formalism used to describe the class of multilevel methods for large sparse Markov chains has no influence on the theoretical results derived.

Journal ArticleDOI
TL;DR: The results generalize and unify existing work, answer a number of open questions, and provide useful tools for structured backward error investigations.
Abstract: Given a class of structured matrices $\mathbb{S}$, we identify pairs of vectors $x,b$ for which there exists a matrix $A\in\mathbb{S}$ such that $Ax=b$, and we also characterize the set of all matrices $A\in\mathbb{S}$ mapping $x$ to $b$. The structured classes we consider are the Lie and Jordan algebras associated with orthosymmetric scalar products. These include (skew-)symmetric, (skew-)Hamiltonian, pseudo(skew-)Hermitian, persymmetric, and perskew-symmetric matrices. Structured mappings with extremal properties are also investigated. In particular, structured mappings of minimal rank are identified and shown to be unique when rank one is achieved. The structured mapping of minimal Frobenius norm is always unique, and explicit formulas for it and its norm are obtained. Finally the set of all structured mappings of minimal 2-norm is characterized. Our results generalize and unify existing work, answer a number of open questions, and provide useful tools for structured backward error investigations.
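To give the flavor of these results, for the Jordan algebra of real symmetric matrices the minimal Frobenius-norm mapping taking $x$ to $b$ has an explicit rank-two formula (a standard formula quoted here from general knowledge, not copied from the paper):

```python
import numpy as np

def min_frobenius_symmetric_map(x, b):
    """Minimal Frobenius-norm symmetric A with A x = b:
    A = (b x^T + x b^T)/(x^T x) - (x^T b) x x^T / (x^T x)^2."""
    xx = x @ x
    return (np.outer(b, x) + np.outer(x, b)) / xx - (x @ b) * np.outer(x, x) / xx**2

rng = np.random.default_rng(11)
x = rng.standard_normal(5)
b = rng.standard_normal(5)
A = min_frobenius_symmetric_map(x, b)
print(np.allclose(A, A.T), np.allclose(A @ x, b))   # symmetric, and A x = b
```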

Journal ArticleDOI
TL;DR: This paper first considers the convergence of general stationary linear iterative methods for general singular consistent linear systems and shows that the convergence and the quotient convergence are equivalent.
Abstract: Recently, Lee et al. have published an interesting paper [SIAM J. Matrix Anal. Appl., 28 (2006), pp. 634-641] concerning the energy norm convergence of general stationary linear iterative methods for semidefinite linear systems. In this paper, we first consider the convergence of general stationary linear iterative methods for general singular consistent linear systems and show that the convergence and the quotient convergence are equivalent. Then we consider the convergence of general stationary iterative methods for the semidefinite systems and clarify some issues in Lee et al.'s paper.

Journal ArticleDOI
TL;DR: This paper introduces a Givens-weight representation for rank structured matrices, where the rank structure is defined by certain submatrices starting from the bottom left or upper right matrix corner being of low rank.
Abstract: In this paper we introduce a Givens-weight representation for rank structured matrices, where the rank structure is defined by certain submatrices starting from the bottom left or upper right matrix corner being of low rank. We proceed in two steps. First, we introduce a unitary-weight representation. This representation will be compared to the (block) quasiseparable representations introduced by P. Dewilde and A.-J. van der Veen [Time-varying Systems and Computations, Kluwer Academic Publishers, Boston, 1998]. More specifically, we show that our unitary-weight representations are theoretically equivalent to the so-called block quasiseparable representations in input or output normal form introduced by Dewilde and van der Veen. Next, we move from the unitary-weight to the Givens-weight representation. We then provide some basic algorithms for the unitary/Givens-weight representation, showing how to obtain such a representation for a dense matrix by means of numerical approximation. We also show how to “swap” the representation and how to reduce the number of parameters of the representation, whenever appropriate. As such, these results will become the basis for algorithms on unitary/Givens-weight representations to be described in subsequent papers.

Journal ArticleDOI
TL;DR: This paper shows how, based on the conjugate gradient method, to compute the covariance matrix of parameter estimates and confidence intervals for constrained parameter estimation problems as well as their derivatives.
Abstract: In this paper we show how, based on the conjugate gradient method, to compute the covariance matrix of parameter estimates and confidence intervals for constrained parameter estimation problems as well as their derivatives.

Journal ArticleDOI
TL;DR: This work proposes an approach based on a combination of the conjugate gradient method with Chebyshev filtering polynomials, applied only to a part of the spectrum of the coefficient matrix, as preconditioners that target some specific convergence properties of the conjugate gradient method.
Abstract: One of the most powerful iterative schemes for solving symmetric, positive definite linear systems is the conjugate gradient algorithm of Hestenes and Stiefel [J. Res. Nat. Bur. Standards, 49 (1952), pp. 409-435], especially when it is combined with preconditioning (cf. [P. Concus, G. H. Golub, and D. P. O'Leary, in Proceedings of the Symposium on Sparse Matrix Computations, Argonne National Laboratory, 1975, Academic, New York, 1976]). In many applications, the solution of a sequence of equations with the same coefficient matrix is required. We propose an approach based on a combination of the conjugate gradient method with Chebyshev filtering polynomials, applied only to a part of the spectrum of the coefficient matrix, as preconditioners that target some specific convergence properties of the conjugate gradient method. We show that our preconditioner puts a large number of eigenvalues near one and does not degrade the distribution of the smallest ones. This procedure enables us to construct a lower dimensional Krylov basis that is very rich with respect to the smallest eigenvalues and associated eigenvectors. A major benefit of our method is that this information can then be exploited in a straightforward way to solve sequences of systems with little extra work. We illustrate the performance of our method through numerical experiments on a set of linear systems.
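For reference, here is a textbook preconditioned conjugate gradient loop of the kind this preconditioner would plug into (the Chebyshev-filter preconditioner itself is not reconstructed here; a simple Jacobi diagonal preconditioner stands in as a placeholder):

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A; M_solve(r) applies M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(12)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)        # SPD test matrix
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)     # Jacobi preconditioner as a stand-in
print(np.linalg.norm(A @ x - b))
```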