
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 1990"


Journal ArticleDOI
TL;DR: In this paper, it is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph.
Abstract: The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorith...

1,762 citations
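
A minimal NumPy sketch of the spectral step described above: form the Laplacian of a small grid graph, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and split the vertices at its median to obtain an edge separator. The grid size and the median split are illustrative choices, and the paper's further step of turning the edge separator into a vertex separator via a maximum matching is not reproduced here.

```python
import numpy as np

def grid_adjacency(p, q):
    """0-1 adjacency matrix of a p-by-q grid graph (illustrative helper)."""
    n = p * q
    A = np.zeros((n, n))
    idx = lambda i, j: i * q + j
    for i in range(p):
        for j in range(q):
            if i + 1 < p:
                A[idx(i, j), idx(i + 1, j)] = A[idx(i + 1, j), idx(i, j)] = 1
            if j + 1 < q:
                A[idx(i, j), idx(i, j + 1)] = A[idx(i, j + 1), idx(i, j)] = 1
    return A

A = grid_adjacency(6, 5)
L = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A
w, V = np.linalg.eigh(L)                  # eigenvalues in ascending order
fiedler = V[:, 1]                         # eigenvector of the second-smallest eigenvalue
part = fiedler >= np.median(fiedler)      # split the vertices at the median entry
cut = [(i, j) for i in range(A.shape[0]) for j in range(i)
       if A[i, j] and part[i] != part[j]]
print("edge separator size:", len(cut))
```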


Journal ArticleDOI
TL;DR: The Laplacian matrix $L(G) = D(G) - A(G)$ of a graph G, the difference of the diagonal matrix of vertex degrees and the 0-1 adjacency matrix, is investigated, with particular attention to multiplicities of integer eigenvalues and to the effect of modifications of G on the spectrum.
Abstract: Let G be a graph. The Laplacian matrix $L(G) = D(G) - A(G)$ is the difference of the diagonal matrix of vertex degrees and the 0-1 adjacency matrix. Various aspects of the spectrum of $L(G)$ are investigated. Particular attention is given to multiplicities of integer eigenvalues and to the effect on the spectrum of various modifications of G.

602 citations


Journal ArticleDOI
TL;DR: The role of elimination trees in the direct solution of large sparse linear systems is examined, and their relation to sparse Cholesky factorization is discussed.
Abstract: In this paper, the role of elimination trees in the direct solution of large sparse linear systems is examined. The notion of elimination trees is described and its relation to sparse Cholesky factorization is discussed. The use of elimination trees in the various phases of direct factorization are surveyed: in reordering, sparse storage schemes, symbolic factorization, numeric factorization, and different computing environments.

519 citations
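
A short sketch of how an elimination tree can be computed from the nonzero pattern of a symmetric matrix, using the standard path-compression algorithm (Liu); the 5-by-5 pattern is a made-up example and the function name is illustrative.

```python
import numpy as np

def etree(S):
    """parent[j] = parent of column j in the elimination tree of the symmetric pattern S."""
    n = S.shape[0]
    parent = -np.ones(n, dtype=int)
    ancestor = -np.ones(n, dtype=int)        # path-compressed ancestors
    for j in range(n):
        for i in np.nonzero(S[j, :j])[0]:    # nonzeros s_ji with i < j
            r = i
            while ancestor[r] != -1 and ancestor[r] != j:
                t = ancestor[r]
                ancestor[r] = j              # compress the path toward j
                r = t
            if ancestor[r] == -1:
                ancestor[r] = j
                parent[r] = j
    return parent

S = np.array([[1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 0, 1, 1, 1]])
print("parent:", etree(S))                   # the tree that guides the factorization phases
```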


Journal ArticleDOI
TL;DR: For compact Hilbert space operators A and B, the singular values of $A^*B$ are shown to be dominated by those of $\frac{1}{2}(AA^* + BB^*)$.
Abstract: For compact Hilbert space operators A and B, the singular values of $A^ * B$ are shown to be dominated by those of $\frac{1}{2}(AA^* + BB^* )$

176 citations
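
A numerical spot-check of the stated domination, read entrywise in the ordered singular values, i.e. $s_j(A^*B) \leq s_j(\tfrac{1}{2}(AA^* + BB^*))$ for every j; random finite-dimensional matrices only, so this is an illustration rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    n = 6
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    s_prod = np.linalg.svd(A.conj().T @ B, compute_uv=False)
    s_mean = np.linalg.svd(0.5 * (A @ A.conj().T + B @ B.conj().T), compute_uv=False)
    assert np.all(s_prod <= s_mean + 1e-10)   # singular values come back sorted descending
print("domination held on all random trials")
```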


Journal ArticleDOI
TL;DR: The results presented begin with the observation that for many distributions of matrices, the matrix elements after the first few steps of elimination are approximately normally distributed.
Abstract: Gaussian elimination with partial pivoting is unstable in the worst case: the “growth factor” can be as large as $2^{n - 1} $, where n is the matrix dimension, resulting in a loss of $n - 1$ bits of precision. It is proposed that an average-case analysis can help explain why it is nevertheless stable in practice. The results presented begin with the observation that for many distributions of matrices, the matrix elements after the first few steps of elimination are approximately normally distributed. From here, with the aid of estimates from extreme value statistics, reasonably accurate predictions of the average magnitudes of elements, pivots, multipliers, and growth factors are derived. For various distributions of matrices with dimensions $n\leqq 1024$, the average growth factor (normalized by the standard deviation of the initial matrix elements) is within a few percent of $n^{2/3} $ for partial pivoting and approximately $n^{1/2} $ for complete pivoting. The average maximum element of the residual wi...

160 citations
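
A rough experiment in the spirit of the average-case analysis: draw matrices with independent standard normal entries, factor with partial pivoting, and record max|U| divided by the standard deviation of the initial entries as a proxy for the normalized growth factor. The sample sizes are illustrative, and only the trend, not the paper's constants, should be expected.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
for n in (64, 128, 256):
    growth = []
    for _ in range(30):
        A = rng.standard_normal((n, n))
        _, _, U = lu(A)                             # Gaussian elimination with partial pivoting
        growth.append(np.abs(U).max() / A.std())    # growth normalized by the initial spread
    print(n, round(np.mean(growth), 2), round(n ** (2.0 / 3.0), 2))   # compare with n^{2/3}
```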


Journal ArticleDOI
TL;DR: Solving Newton’s linear system using updated matrix factorizations or the (unpreconditioned) conjugate gradient iteration gives the most effective algorithms.
Abstract: Several variants of Newton’s method are used to obtain estimates of solution vectors and residual vectors for the linear model $Ax = b + e = b_{true} $ using an iteratively reweighted least squares criterion, which tends to diminish the influence of outliers compared with the standard least squares criterion. Algorithms appropriate for dense and sparse matrices are presented. Solving Newton’s linear system using updated matrix factorizations or the (unpreconditioned) conjugate gradient iteration gives the most effective algorithms. Four weighting functions are compared, and results are given for sparse well-conditioned and ill-conditioned problems.

153 citations
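
A minimal iteratively reweighted least squares loop with a Huber-type weight function, fitting a linear model contaminated by gross outliers. This is the basic fixed-point IRLS iteration, not the Newton variants or the sparse algorithms studied in the paper; the weight function, tolerance, and data are illustrative.

```python
import numpy as np

def irls(A, b, delta=1.0, iters=50, tol=1e-10):
    """Robust fit of A x ~ b by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary least squares start
    for _ in range(iters):
        r = b - A @ x
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))  # downweight large residuals
        W = np.sqrt(w)[:, None]
        x_new = np.linalg.lstsq(W * A, np.sqrt(w) * b, rcond=None)[0]
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 0.01 * rng.standard_normal(200)
b[:10] += 50.0                                        # gross outliers in the data
print(np.linalg.norm(irls(A, b) - x_true))            # typically far closer than plain lstsq
```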


Journal ArticleDOI
TL;DR: The problem of minimizing the distance from a given symmetric matrix to the class of Euclidean distance matrices is treated; in dimension $n = 3$ the solution is obtained in closed form.
Abstract: Recent extensions of von Neumann’s alternating projection algorithm permit an effective numerical approach to certain least squares problems subject to side conditions. This paper treats the problem of minimizing the distance from a given symmetric matrix to the class of Euclidean distance matrices; in dimension $n = 3$ we obtain the solution in closed form.

99 citations
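
A hedged sketch of the projection idea for general n: Dykstra's corrected alternating projections between (i) hollow symmetric matrices and (ii) matrices that are negative semidefinite on the orthogonal complement of the all-ones vector, whose intersection is the set of Euclidean distance matrices. The iteration count and test matrix are illustrative; the paper's closed form for n = 3 is not reproduced.

```python
import numpy as np

def proj_hollow(A):
    A = 0.5 * (A + A.T)
    np.fill_diagonal(A, 0.0)                     # symmetric with zero diagonal
    return A

def proj_nsd_on_eperp(A, Q):
    B = Q.T @ A @ Q                              # first coordinate is the all-ones direction
    w, V = np.linalg.eigh(B[1:, 1:])
    B[1:, 1:] = (V * np.minimum(w, 0.0)) @ V.T   # clip positive eigenvalues on e-perp
    return Q @ B @ Q.T

def nearest_edm(A, iters=500):
    n = A.shape[0]
    Q, _ = np.linalg.qr(np.column_stack([np.ones((n, 1)), np.eye(n)[:, :n - 1]]))
    X = 0.5 * (A + A.T)
    P = np.zeros_like(X)
    R = np.zeros_like(X)                         # Dykstra correction terms
    for _ in range(iters):
        Y = proj_nsd_on_eperp(X + P, Q)
        P = X + P - Y
        X = proj_hollow(Y + R)
        R = Y + R - X
    return X

rng = np.random.default_rng(2)
D = nearest_edm(rng.standard_normal((5, 5)))
print("zero diagonal:", np.allclose(np.diag(D), 0.0))
print("eigenvalues:", np.round(np.linalg.eigvalsh(D), 4))   # an EDM has one positive eigenvalue
```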


Journal ArticleDOI
TL;DR: The smallest singular value, and hence the condition number, of a dense triangular matrix is estimated as the matrix is generated one row or column at a time; the estimator can be interpreted as approximating the secular equation by a simpler rational function.
Abstract: This paper introduces a new technique for estimating the smallest singular value, and hence the condition number, of a dense triangular matrix as it is generated one row or column at a time. It is also shown how this condition estimator can be interpreted as trying to approximate the secular equation with a simpler rational function. While one can construct examples where this estimator fails, numerical experiments demonstrate that despite its small computational cost, it produces reliable estimates. Also given is an example that shows the advantage of incorporating the incremental condition estimation strategy into the QR factorization algorithm with column pivoting to guard against near rank deficiency going unnoticed.

93 citations


Journal ArticleDOI
TL;DR: In this article, a wide variety of quadratic Lyapunov bounds are systematically developed and a unified treatment of several bounds developed previously for feedback control design is provided for robust stability and performance analysis.
Abstract: For a given asymptotically stable linear dynamic system it is often of interest to determine whether stability is preserved as the system varies within a specified class of uncertainties. If, in addition, there also exist associated performance measures (such as the steady-state variances of selected state variables), it is desirable to assess the worst-case performance over a class of plant variations. These are problems of robust stability and performance analysis. In the present paper, quadratic Lyapunov bounds used to obtain a simultaneous treatment of both robust stability and performance are considered. The approach is based on the construction of modified Lyapunov equations, which provide sufficient conditions for robust stability along with robust performance bounds. In this paper, a wide variety of quadratic Lyapunov bounds are systematically developed and a unified treatment of several bounds developed previously for feedback control design is provided.

92 citations
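
For context, the nominal performance measure involved can be computed from a single Lyapunov equation: for a stable system dx = Ax dt + dw with noise intensity V, the steady-state covariance P satisfies A P + P A^T + V = 0. The sketch below shows only this baseline computation with SciPy on a made-up A and V, not the modified Lyapunov equations that give the robust bounds.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])                 # asymptotically stable nominal dynamics
V = np.eye(2)                                # noise intensity
P = solve_continuous_lyapunov(A, -V)         # solves A P + P A^T = -V
print(np.diag(P))                            # steady-state variances of the state variables
```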


Journal ArticleDOI
TL;DR: Generalizations of balanced model reduction to the stochastic system approximation problem are presented, and the ideas of principal components are extended to the problem of approximating the information interface between two random vectors; this leads to three approximate stochastic realization methods based on singular value decomposi...
Abstract: The state of a linear system is an information interface between past inputs and future outputs, and system approximation (even identification) is essentially a problem of approximating a large-dimensional interface by a low-order partial state. Balanced Model Reduction [IEEE Trans. Automat. Control, 26 (1981), pp. 17–31], the Fujishige–Nagai–Sawaragi Model Reduction Algorithm [Internat. J. Control, 22 (1975), pp. 807–819], and the Principal Hankel Components Algorithm for system identification [Proc. 12th Asilomar Conference on Circuits Systems and Computers, Pacific Grove, CA, November 1978] approximate this input-output interface by its principal components. First, generalizations of balanced model reduction to the stochastic system approximation problem are presented. Then the ideas of principal components are extended to the problem of approximating the information interface between two random vectors; this leads to three approximate stochastic realization methods based on singular value decomposi...

80 citations
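
A hedged sketch of ordinary (deterministic) square-root balanced truncation, the principal-component idea that the paper extends to the stochastic setting: solve for the two Gramians, balance via an SVD of their Cholesky factors, and truncate. The test system is a made-up stable example and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                        # s are the Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                            # balancing and truncating maps
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

A = np.array([[-1.0,  0.5,  0.0,  0.0],
              [ 0.0, -2.0,  0.5,  0.0],
              [ 0.0,  0.0, -5.0,  0.5],
              [ 0.0,  0.0,  0.0, -10.0]])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
print("reduced-order poles:", np.round(np.linalg.eigvals(Ar), 4))
```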


Journal ArticleDOI
TL;DR: A method is derived for computing a “stable” ordering of the points $\alpha _i $; it mimics the interchanges performed by Gaussian elimination with partial pivoting, using only $O(n^2)$ operations.
Abstract: A confluent Vandermonde-like matrix $P(\alpha _0 ,\alpha _1 , \cdots ,\alpha _n )$ is a generalisation of the confluent Vandermonde matrix in which the monomials are replaced by arbitrary polynomials. For the case where the polynomials satisfy a three-term recurrence relation, algorithms for solving the systems $Px = b$ and $P^T a = f$ in $O(n^2 )$ operations are derived. Forward and backward error analyses that provide bounds for the relative error and the residual of the computed solution are given. The bounds reveal a rich variety of problem-dependent phenomena, including both good and bad stability properties and the possibility of extremely accurate solutions. To combat potential instability, a method is derived for computing a “stable” ordering of the points $\alpha _i $; it mimics the interchanges performed by Gaussian elimination with partial pivoting, using only $O(n^2)$ operations. The results of extensive numerical tests are summarised, and recommendations are given for how to use the fast algo...
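
For orientation, the classical Björck–Pereyra recurrences solve the monomial (ordinary Vandermonde) special case in $O(n^2)$ operations; the paper's algorithms generalize this to polynomials satisfying a three-term recurrence and to confluent points, which is not reproduced in this sketch.

```python
import numpy as np

def vandermonde_solve(alpha, f):
    """Coefficients a with sum_j a_j t^j = f_i at t = alpha_i (Bjorck-Pereyra, O(n^2))."""
    a = np.array(f, dtype=float)
    n = len(alpha)
    for k in range(n - 1):                    # Newton divided differences
        for j in range(n - 1, k, -1):
            a[j] = (a[j] - a[j - 1]) / (alpha[j] - alpha[j - k - 1])
    for k in range(n - 2, -1, -1):            # Newton form -> monomial coefficients
        for j in range(k, n - 1):
            a[j] -= alpha[k] * a[j + 1]
    return a

alpha = np.array([0.0, 1.0, 2.0, 4.0])
f = np.array([1.0, 3.0, 11.0, 69.0])          # values of 1 + t + t^3 at the points alpha
a = vandermonde_solve(alpha, f)
V = np.vander(alpha, increasing=True)
print(a, np.allclose(V @ a, f))               # ~ [1, 1, 0, 1], and V a reproduces f
```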

Journal ArticleDOI
TL;DR: Calculations done on the Alliant FX/8 multiprocessing/vector computer indicate speedups of nine to ten.
Abstract: Parallel iterative methods are studied, and the focus is on linear algebraic systems whose matrix is symmetric and positive definite. The set of unknowns may be viewed as a union of subsets of unknowns (possibly with overlap). The parallel iteration matrix is then formed by a weighted sum of iteration matrices that are associated with splittings of the matrix corresponding to the blocks. When the blocks are from a matrix in dissection form, it can be shown under suitable conditions that the parallel algorithm is convergent. When the multisplitting version of successive over-relaxation (SOR) is used, the SOR parameter $\omega$ is required to be less than some $\omega _0 < 2.0$. Calculations done on the Alliant FX/8 multiprocessing/vector computer indicate speedups of nine to ten.
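
A serial sketch of the multisplitting structure on a small SPD tridiagonal system: two overlapping blocks are solved independently (the source of parallelism) and combined with nonnegative diagonal weights that sum to one per unknown. This is a plain block Jacobi multisplitting, not the SOR variant analyzed in the paper, and the block choices and names are illustrative.

```python
import numpy as np

def multisplitting(A, b, blocks, weights, iters=200):
    x = np.zeros(len(b))
    for _ in range(iters):
        x_new = np.zeros_like(x)
        for S, w in zip(blocks, weights):      # each block solve is independent
            AS = A[np.ix_(S, S)]
            y = x.copy()
            y[S] = np.linalg.solve(AS, b[S] - A[S, :] @ x + AS @ x[S])
            x_new += w * y                     # diagonal weights, summing to one
        x = x_new
    return x

n = 12
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # SPD, diagonally dominant
b = np.ones(n)
blocks = [list(range(0, 8)), list(range(4, 12))]          # overlapping index sets
count = np.zeros(n)
for S in blocks:
    count[S] += 1
weights = []
for S in blocks:
    w = np.zeros(n)
    w[S] = 1.0 / count[S]                                 # overlapped unknowns are averaged
    weights.append(w)
x = multisplitting(A, b, blocks, weights)
print(np.linalg.norm(x - np.linalg.solve(A, b)))
```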

Journal ArticleDOI
TL;DR: It is shown that for any unitarily invariant norm $\| \cdot \|$ on the space of n-by-n complex matrices, $\| A^* B \|^2 \leq \| A^* A \| \, \| B^* B \|$ for all A and B.
Abstract: The authors show that for any unitarily invariant norm $\| \cdot \|$ on $M_n$ (the space of n-by-n complex matrices) \[ (1)\qquad \| A^* B \|^2 \leq \| A^* A \| \| B^* B \| \quad \text{for all}\,...
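
A numerical spot-check of the inequality for three familiar unitarily invariant norms (spectral, Frobenius, nuclear) on random complex matrices; an illustration only, not a proof.

```python
import numpy as np

def ui_norms(M):
    s = np.linalg.svd(M, compute_uv=False)
    return s[0], np.sqrt((s ** 2).sum()), s.sum()        # spectral, Frobenius, nuclear

rng = np.random.default_rng(4)
ok = True
for _ in range(200):
    A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    lhs = ui_norms(A.conj().T @ B)
    ra = ui_norms(A.conj().T @ A)
    rb = ui_norms(B.conj().T @ B)
    ok = ok and all(l ** 2 <= x * y + 1e-9 for l, x, y in zip(lhs, ra, rb))
print(ok)
```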

Journal ArticleDOI
TL;DR: In this article, a body of theory for centrohermitian and skew-centrohermitian matrices is developed, and some basic results for these matrices, their spectral properties, and characterizations of linear transformations that preserve them are given.
Abstract: A body of theory for centrohermitian and skew-centrohermitian matrices is developed. Some basic results for these matrices, their spectral properties, and characterizations of linear transformations that preserve them are given.
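
A quick illustration of the defining property, assuming the usual convention that A is centrohermitian when J conj(A) J = A with J the exchange (flip) matrix: any complex matrix splits into a centrohermitian and a skew-centrohermitian part.

```python
import numpy as np

n = 4
J = np.fliplr(np.eye(n))                      # exchange matrix
rng = np.random.default_rng(5)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = 0.5 * (B + J @ B.conj() @ J)              # centrohermitian part
K = 0.5 * (B - J @ B.conj() @ J)              # skew-centrohermitian part
print(np.allclose(J @ C.conj() @ J, C), np.allclose(J @ K.conj() @ J, -K))
```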

Journal ArticleDOI
TL;DR: Efficient algorithms for computing triangular decompositions of Hermitian matrices with small displacement rank using hyperbolic Householder matrices are derived and an extension to the efficient factorization of indefinite systems is described.
Abstract: Efficient algorithms for computing triangular decompositions of Hermitian matrices with small displacement rank using hyperbolic Householder matrices are derived. These algorithms can be both vectorized and parallelized. Implementations along with performance results on an Alliant FX/80, Cray X-MP/48, and Cray-2 are discussed. The use of Householder-type transformations is shown to improve performance for problems with nontrivial displacement ranks. In special cases, the general algorithm reduces to the well-known Schur algorithm for factoring Toeplitz matrices and Elden’s algorithm for solving structured regularization problems. It gives a Householder formulation to the class of algorithms based on hyperbolic rotations studied by Kailath, Lev-Ari, Chun, and their colleagues for Hermitian matrices with small displacement structure. In addition, an extension to the efficient factorization of indefinite systems is described.

Journal ArticleDOI
TL;DR: In this paper, the sensitivity of the algebraic structure of rectangular matrix pencils to perturbations in the coefficients is examined, and upper and lower bounds on the distance from a given pencil to one with a qualitatively different Kronecker structure are derived.
Abstract: The sensitivity of the algebraic (Kronecker) structure of rectangular matrix pencils to perturbations in the coefficients is examined. Eigenvalue perturbation bounds in the spirit of Bauer–Fike are used to develop computational upper and lower bounds on the distance from a given pencil to one with a qualitatively different Kronecker structure.

Journal ArticleDOI
TL;DR: In this paper, the backward error analysis, perturbation theory, and properties of the $LU$ factorization of a tridiagonal matrix were used to obtain the best bound available for the error.
Abstract: If $\hat x$ is the computed solution to a tridiagonal system $Ax = b$ obtained by Gaussian elimination, what is the “best” bound available for the error $x - \hat x$ and how can it be computed efficiently? This question is answered using backward error analysis, perturbation theory, and properties of the $LU$ factorization of A. For three practically important classes of tridiagonal matrix, those that are symmetric positive definite, totally nonnegative, or M-matrices, it is shown that $(A + E)\hat x = b$ where the backward error matrix E is small componentwise relative to A. For these classes of matrices the appropriate forward error bound involves Skeel’s condition number cond $(A,x)$, which, it is shown, can be computed exactly in $O(n)$ operations. For diagonally dominant tridiagonal A the same type of backward error result holds, and the author obtains a useful upper bound for cond $(A,x)$ that can be computed in $O(n)$ operations. Error bounds and their computation for general tridiagonal matrices a...
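
For reference, Skeel's condition number is cond(A, x) = || |A^{-1}| |A| |x| ||_inf / ||x||_inf. The sketch below evaluates it naively with a dense inverse just to show the quantity; the point of the paper is that for the tridiagonal classes above it can be obtained exactly in O(n) operations without forming A^{-1}.

```python
import numpy as np

def skeel_cond(A, x):
    Ainv = np.linalg.inv(A)                   # for illustration only; not the O(n) approach
    num = np.linalg.norm(np.abs(Ainv) @ (np.abs(A) @ np.abs(x)), np.inf)
    return num / np.linalg.norm(x, np.inf)

n = 8
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
x = np.linspace(1.0, 2.0, n)
print(skeel_cond(A, x), np.linalg.cond(A, np.inf))   # compare with the normwise condition number
```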

Journal ArticleDOI
TL;DR: In this paper, it was shown that for an affine pseudomonotone mapping, the feasibility of the (linear) complementarity problem implies its solvability; a result of this type was proved earlier by Karamardian under a strict feasibility condition.
Abstract: In this article, it is shown that for an affine pseudomonotone mapping, the feasibility of the (linear) complementarity problem implies its solvability. A result of this type was proved earlier by Karamardian under a strict feasibility condition.

Journal ArticleDOI
TL;DR: A parallel algorithm for the direct LU factorization of general unsymmetric sparse matrices is presented, based on a new nondeterministic parallel pivot search that finds a compatible pivot set of size m, followed by a parallel rank-m update.
Abstract: A parallel algorithm for the direct LU factorization of general unsymmetric sparse matrices is presented. The algorithm D2 is based on a new nondeterministic parallel pivot search that finds a compatible pivot set ${\bf S}$ of size m, followed by a parallel rank-m update. These two steps alternate until switching to dense matrix code or until the matrix is factored. The algorithm is based on a shared-memory multiple-instruction-multiple-data (MIMD) model and takes advantage of both concurrency and (gather-scatter) vectorization. The detection of parallelism due to sparsity is based on Markowitz’s strategy, an unsymmetric ordering method. As a result, D2 finds more potential parallelism for matrices with highly asymmetric nonzero patterns than algorithms that construct an elimination tree using a symmetric ordering method (minimum degree or nested dissection, for example) applied to the symmetric pattern of ${\bf A} + {\bf A}^{\bf T} $ or ${\bf A}^{\bf T} {\bf A}$. The pivot search exploits more parallelism...

Journal ArticleDOI
TL;DR: Numerical experiments demonstrate the reliability of this scheme in estimating the smallest singular value of a triangular factor matrix as the factor is generated one column or row at a time.
Abstract: Incremental condition estimation provides an estimate for the smallest singular value of a triangular matrix. In particular, it gives a running estimate of the smallest singular value of a triangular factor matrix as the factor is generated one column or row at a time. An incremental condition estimator for dense matrices was originally suggested by Bischof. In this paper this scheme is generalized to handle sparse triangular matrices, especially those that are factors of sparse matrices. Numerical experiments on a variety of matrices demonstrate the reliability of this scheme in estimating the smallest singular value. A partial description of its implementation in a sparse matrix factorization code further illustrates its practicality.

Journal ArticleDOI
TL;DR: A new algorithm for the computation of a pseudoperipheral node of a graph that accesses the adjacency structure of the sparse matrix in a regular pattern is presented and the application of this algorithm to reordering algorithms for the solution of sparse linear systems is discussed.
Abstract: A new algorithm for the computation of a pseudoperipheral node of a graph is presented, and the application of this algorithm to reordering algorithms for the solution of sparse linear systems is discussed. Numerical tests on large sparse matrix problems show the efficiency of the new algorithm. When used for some of the reordering algorithms for reducing the profile and bandwidth of a sparse matrix, the results obtained with the pseudoperipheral nodes of the new algorithm are comparable to the results obtained with the pseudoperipheral nodes produced by the SPARSPAK version of the Gibbs–Poole–Stockmeyer algorithm. The advantage of the new algorithm is that it accesses the adjacency structure of the sparse matrix in a regular pattern. Thus this algorithm is much more suitable both for a parallel and for an out-of-core implementation of the ordering phase for sparse matrix problems.
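
For comparison, a compact version of the classical BFS-based pseudoperipheral node search in the Gibbs–Poole–Stockmeyer / George–Liu style (restart from a minimum-degree node in the last level until the eccentricity stops growing). The paper's new algorithm, whose point is more regular access to the adjacency structure, is not reproduced here; the small adjacency list is a made-up example.

```python
def level_structure(adj, root):
    """BFS levels (list of lists) rooted at `root`."""
    seen = {root}
    levels = [[root]]
    while True:
        nxt = [v for u in levels[-1] for v in adj[u] if v not in seen]
        if not nxt:
            return levels
        nxt = list(dict.fromkeys(nxt))                    # deduplicate, keep order
        seen.update(nxt)
        levels.append(nxt)

def pseudoperipheral(adj, start=0):
    r, ecc = start, -1
    while True:
        levels = level_structure(adj, r)
        if len(levels) - 1 <= ecc:
            return r                                      # eccentricity stopped growing
        ecc = len(levels) - 1
        r = min(levels[-1], key=lambda v: len(adj[v]))    # min-degree node in the last level

adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
print(pseudoperipheral(adj))
```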

Journal ArticleDOI
TL;DR: In this paper, a map that associates with each matrix pencil another matrix pencil in canonical form for the strict equivalence of pencils is defined, and the pencils where this map is continuous are characterized.
Abstract: In this paper a map that associates with each matrix pencil another matrix pencil in canonical form for the strict equivalence of pencils (Kronecker canonical form) is defined. Then the pencils where this map is continuous are characterized. The continuity of the canonical form obtained for the equivalence of matrix quadruples and triples from the Kronecker canonical form of the corresponding pencils is studied.

Journal ArticleDOI
TL;DR: A tie-breaking rule is exhibited for which the minimum degree ordering produces fill-in of $\Theta (n^{\log _3 4})$ on a torus graph whose optimal fill-in is $\Theta (n\log n)$, and experimental results suggest that random tie resolution yields a similarly large fill-in.
Abstract: The minimum degree ordering for Gaussian elimination is considered. A way to resolve ties is exhibited that results in a fill-in of $\Theta (n^{\log _3 4} )$ for $n \times n$ matrices whose zero/nonzero structure corresponds to a torus graph, even though the optimal fill-in is $\Theta (n\log n)$. Experimental results suggest that random tie resolution yields a similar fill-in.
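
A small experiment in the spirit of the paper: run greedy minimum-degree elimination on a graph, count the fill edges, and compare a deterministic tie-breaking rule with random tie resolution. The grid test graph and parameters are illustrative; the torus construction and the Theta bounds themselves are not reproduced.

```python
import random

def min_degree_fill(adj, tie="random", seed=0):
    adj = {v: set(nb) for v, nb in adj.items()}           # work on a copy
    rng = random.Random(seed)
    fill = 0
    while adj:
        d = min(len(nb) for nb in adj.values())
        ties = [v for v, nb in adj.items() if len(nb) == d]
        v = rng.choice(ties) if tie == "random" else min(ties)
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:                                    # eliminate v: make its neighbours a clique
            for w in nbrs:
                if u < w and w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill += 1
    return fill

def grid_graph(p, q):
    idx = lambda i, j: i * q + j
    adj = {idx(i, j): set() for i in range(p) for j in range(q)}
    for i in range(p):
        for j in range(q):
            if i + 1 < p:
                adj[idx(i, j)].add(idx(i + 1, j)); adj[idx(i + 1, j)].add(idx(i, j))
            if j + 1 < q:
                adj[idx(i, j)].add(idx(i, j + 1)); adj[idx(i, j + 1)].add(idx(i, j))
    return adj

g = grid_graph(10, 10)
print(min_degree_fill(g, tie="lowest"), min_degree_fill(g, tie="random"))
```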

Journal ArticleDOI
TL;DR: Equilibrium equations may be stated in a constrained minimization form and a Lagrange multiplier form, the latter obtained by block Gaussian elimination on the $2 \times 2$ block matrix system, where A is generally some symmetric positive-definite matrix associated with the minimization problem.
Abstract: Equations of equilibrium arise in numerous areas of engineering. Applications to electrical networks, structures, and fluid flow are elegantly described in Introduction to Applied Mathematics, Wellesley Cambridge Press, Wellesley, MA, 1986 by Strang. The context in which equilibrium equations arise may be stated in two forms:
Constrained Minimization Form: $\min(x^T Ax - 2x^T r)$ subject to $Ex = s$;
Lagrange Multiplier Form: $EA^{ - 1} E^T \lambda = s - EA^{ - 1} r$ and $Ax = r - E^T \lambda $.
The Lagrange multiplier form given above results from block Gaussian elimination on the $2 \times 2$ block matrix system for the constrained minimization form. Here A is generally some symmetric positive-definite matrix associated with the minimization problem. For example, A can be the element flexibility matrix in the structures application. An important approach (called the force method in structural optimization) to the solution to such problems involves dimension reduction nullspace schemes bas...
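
The two forms can be checked against each other on a tiny random problem: solve the 2-by-2 block ("KKT") system directly, then recover the same solution from the Schur-complement (Lagrange multiplier) form obtained by block elimination. The sign convention for the multiplier below is the one implied by the block system and may differ from the abstract's; A is taken symmetric positive definite.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                   # symmetric positive definite
E = rng.standard_normal((m, n))
r = rng.standard_normal(n)
s = rng.standard_normal(m)

# Block system:  [A  E^T] [x]        [r]
#                [E   0 ] [lambda] = [s]
K = np.block([[A, E.T], [E, np.zeros((m, m))]])
xl = np.linalg.solve(K, np.concatenate([r, s]))

# Block elimination: (E A^{-1} E^T) lambda = E A^{-1} r - s, then A x = r - E^T lambda
lam = np.linalg.solve(E @ np.linalg.solve(A, E.T), E @ np.linalg.solve(A, r) - s)
x = np.linalg.solve(A, r - E.T @ lam)
print(np.allclose(xl, np.concatenate([x, lam])))
```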

Journal ArticleDOI
TL;DR: The evolution of the spectrum of $T_n$ as the parameter $t = t_{n-1}$ varies over $(-\infty ,\infty )$ is studied; the eigenvalues associated with symmetric (reciprocal) eigenvectors are strictly increasing functions of t, while those associated with skew-symmetric (anti-reciprocal) eigenvectors are strictly decreasing.
Abstract: Let $T_n = (t_{|i - j|})_{i,j = 1}^n$ $(n \geq 3)$ be a real symmetric Toeplitz matrix such that $T_{n - 1}$ and $T_{n - 2}$ have no eigenvalues in common. We consider the evolution of the spectrum of $T_n$ as the parameter $t = t_{n - 1}$ varies over $(-\infty ,\infty )$. It is shown that the eigenvalues of $T_n$ associated with symmetric (reciprocal) eigenvectors are strictly increasing functions of t, while those associated with the skew-symmetric (anti-reciprocal) eigenvectors are strictly decreasing. Results are obtained on the asymptotic behavior of the eigenvalues and eigenvectors as $t \to \pm \infty $, and on the possible orderings of eigenvalues associated with symmetric and skew-symmetric eigenvectors.
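
A small numerical illustration on a made-up 5-by-5 example: vary the outermost entry t_{n-1}, classify each eigenvector as symmetric (Jv = v) or skew-symmetric (Jv = -v) under the flip J, and watch the two families of eigenvalues move in opposite directions as t increases.

```python
import numpy as np
from scipy.linalg import toeplitz

c = np.array([2.0, 1.0, -0.5, 0.3, 0.0])      # first column of T_5; c[-1] plays the role of t_{n-1}
J = np.fliplr(np.eye(5))
for t in (-2.0, 0.0, 2.0):
    c[-1] = t
    w, V = np.linalg.eigh(toeplitz(c))
    labels = ["sym" if np.linalg.norm(J @ v - v) < np.linalg.norm(J @ v + v) else "skew"
              for v in V.T]
    print(f"t = {t:+.1f}:", ", ".join(f"{lam:7.3f} ({k})" for lam, k in zip(w, labels)))
```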

Journal ArticleDOI
TL;DR: In this paper it is shown how Rutishauser’s approach can be generalized to yield large families of flows in a natural manner and the flows derived include continuous analogues of the $LR$, $QR$, $SR$, and $HR$ algorithms.
Abstract: Certain variants of the Toda flow are continuous analogues of the $QR$ algorithm and other algorithms for calculating eigenvalues of matrices. This was a remarkable discovery of the early eighties. Until very recently contemporary researchers studying this circle of ideas have been unaware that continuous analogues of the quotient-difference and $LR$ algorithms were already known to Rutishauser in the fifties. Rutishauser’s continuous analogue of the quotient-difference algorithm contains the finite, nonperiodic Toda flow as a special case. A nice feature of Rutishauser’s approach is that it leads from the (discrete) eigenvalue algorithm to the (continuous) flow by a limiting process. Thus the connection between the algorithm and the flow does not come as a surprise. In this paper it is shown how Rutishauser’s approach can be generalized to yield large families of flows in a natural manner. The flows derived include continuous analogues of the $LR$, $QR$, $SR$, and $HR$ algorithms.
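
A small numerical illustration of the phenomenon, assuming the usual Lax form of the symmetric Toda flow dL/dt = [L, k(L)] with k(L) built from the strictly lower triangle: integrating the flow drives L toward a diagonal matrix whose entries are the eigenvalues of L(0). The families of LR/QR/SR/HR analogues derived in the paper are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def toda_rhs(t, y, n):
    L = y.reshape(n, n)
    low = np.tril(L, -1)
    k = low - low.T                            # skew-symmetric part from the lower triangle
    return (L @ k - k @ L).ravel()             # dL/dt = [L, k(L)], an isospectral flow

n = 4
rng = np.random.default_rng(7)
M = rng.standard_normal((n, n))
L0 = 0.5 * (M + M.T)                           # symmetric starting matrix
sol = solve_ivp(toda_rhs, (0.0, 30.0), L0.ravel(), args=(n,), rtol=1e-9, atol=1e-12)
L_end = sol.y[:, -1].reshape(n, n)
print(np.round(np.sort(np.diag(L_end)), 5))           # diagonal of L(t) for large t
print(np.round(np.sort(np.linalg.eigvalsh(L0)), 5))   # eigenvalues of L(0)
```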

Journal ArticleDOI
TL;DR: In this article, a body of theory for perhermitian and skew-perhermitians is developed, and some basic results for these matrices, their spectral properties, and characterizations of linear transformations that preserve them are given.
Abstract: A body of theory for perhermitian and skew-perhermitian matrices is developed. Some basic results for these matrices, their spectral properties, and characterizations of linear transformations that preserve them are given.

Journal ArticleDOI
TL;DR: A conjecture of Choudhury and Horn concerning conditions for a complex matrix to admit a decomposition as a product of an orthogonal matrix and a symmetric matrix is confirmed in a stronger form.
Abstract: Choudhury and Horn made a conjecture concerning conditions for a complex matrix to admit a decomposition as a product of an orthogonal matrix and a symmetric matrix. This conjecture, in a stronger form, is confirmed.

Journal ArticleDOI
TL;DR: A class of preconditioners for elliptic problems built on ideas borrowed from the digital filtering theory and implemented on a multilevel grid structure is presented, designed to be both rapidly convergent and highly parallelizable.
Abstract: A class of preconditioners for elliptic problems built on ideas borrowed from the digital filtering theory and implemented on a multilevel grid structure is presented. These preconditioners are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows for the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method, and a recent method proposed by Bramble–Pasciak–Xu.

Journal ArticleDOI
TL;DR: Some of the lowest eigenvalues of $C = A + B$, together with the corresponding invariant subspace, are bounded in terms of B, a subspectrum of A, and an invariant subspace of A; the results are applied to the engineering problem of assessing how a structural modification affects the dynamic behaviour of a structure.
Abstract: Suppose A and B are two $m \times m$ symmetric matrices. Let $C = A + B$. Some of C’s lowest eigenvalues together with their corresponding invariant subspace are bounded in terms of B, a subspectrum of A, and an invariant subspace of A. An application demonstrating the usefulness of the presented theorems is given. The application chosen is related to the frequently encountered engineering problem of the influence of a structural modification on the dynamic behaviour of a structure.
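
The coarsest bounds of this kind are Weyl's inequalities, lambda_i(A) + lambda_min(B) <= lambda_i(A + B) <= lambda_i(A) + lambda_max(B); the paper's theorems use a subspectrum and an invariant subspace of A to do better. Below is a quick numerical check of the Weyl bounds only, on random symmetric matrices, not of the paper's results.

```python
import numpy as np

rng = np.random.default_rng(8)
m = 7
X = rng.standard_normal((m, m)); A = 0.5 * (X + X.T)
Y = rng.standard_normal((m, m)); B = 0.5 * (Y + Y.T)
a = np.linalg.eigvalsh(A)                     # ascending eigenvalues
b = np.linalg.eigvalsh(B)
c = np.linalg.eigvalsh(A + B)
print(np.all(a + b[0] <= c + 1e-12), np.all(c <= a + b[-1] + 1e-12))
```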