
Showing papers in "SIAM Journal on Matrix Analysis and Applications in 1989"


Journal ArticleDOI
TL;DR: In this article, the TLS problem is generalized in order to maintain consistency of the parameter estimates in a general errors-in-variables model; i.e., some of the columns of A may be known exactly and the covariance matrix of the errors in the rows of the remaining data matrix may be...
Abstract: The Total Least Squares (TLS) method has been devised as a more global fitting technique than the ordinary least squares technique for solving overdetermined sets of linear equations $AX \approx B$ when errors occur in all data. This method, introduced into numerical analysis by Golub and Van Loan, is strongly based on the Singular Value Decomposition (SVD). If the errors in the measurements A and B are uncorrelated with zero mean and equal variance, TLS is able to compute a strongly consistent estimate of the true solution of the corresponding unperturbed set $A_0 X = B_0 $. In the statistical literature, these coefficients are called the parameters of a classical errors-in-variables model.In this paper, the TLS problem, as well as the TLS computations, are generalized in order to maintain consistency of the parameter estimates in a general errors-in-variables model; i.e., some of the columns of A may be known exactly and the covariance matrix of the errors in the rows of the remaining data matrix may be...
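
For orientation, a minimal sketch of the classical (unstructured) TLS solution of $AX \approx B$ via the SVD of the augmented matrix $[A\ B]$; it assumes errors in all columns with equal variance and a "generic" problem (the trailing block of right singular vectors is invertible), and it does not implement the generalized mixed LS-TLS computations developed in the paper.

```python
import numpy as np

def tls(A, B):
    """Basic total least squares solution of A X ~ B via the SVD of [A B].

    Assumes the problem is 'generic': the trailing block V22 is invertible.
    """
    m, n = A.shape
    C = np.hstack([A, B])                  # augmented data matrix [A B]
    _, _, Vt = np.linalg.svd(C)            # C = U S V^T, singular values descending
    V = Vt.T
    V12 = V[:n, n:]                        # top-right block of V
    V22 = V[n:, n:]                        # bottom-right block of V
    return -V12 @ np.linalg.inv(V22)       # X_TLS = -V12 V22^{-1}

# consistency check: with exact (unperturbed) data, TLS recovers the true solution
A0 = np.random.rand(20, 3)
X0 = np.random.rand(3, 2)
print(np.allclose(tls(A0, A0 @ X0), X0))
```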

197 citations


Journal ArticleDOI
TL;DR: In this article, a sensitivity theory based on Frechet derivatives is presented that has both theoretical and computational advantages, and two norm-estimation procedures are given; the first is based on a finite-difference approximation of the Frechet derivative and costs only two extra function evaluations.
Abstract: A sensitivity theory based on Frechet derivatives is presented that has both theoretical and computational advantages. Theoretical results such as a generalization of Van Loan’s work on the matrix exponential are easily obtained: matrix functions are least sensitive at normal matrices. Computationally, the central problem is to estimate the norm of the Frechet derivative, since this is equal to the function’s condition number. Two norm-estimation procedures are given; the first is based on a finite-difference approximation of the Frechet derivative and costs only two extra function evaluations. The second method was developed specifically for the exponential and logarithmic functions; it is based on a trapezoidal approximation scheme suggested by the chain rule for the identity $e^X = ( e^{X/2^n } )^{2^n } $. This results in an infinite sequence of coupled Sylvester equations that, when truncated, is uniquely suited to the “scaling and squaring” procedure for $e^X $ or the “inverse scaling and squaring” p...
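
A rough sketch of the finite-difference idea for estimating the norm of the Frechet derivative, and hence the condition number, of a matrix function (shown here for the matrix exponential); the direction sampling, the step-size choice, and the function names are illustrative assumptions rather than the paper's specific two-evaluation estimator.

```python
import numpy as np
from scipy.linalg import expm

def frechet_fd(f, A, E, t=None):
    """Finite-difference approximation of the Frechet derivative L_f(A, E)."""
    if t is None:
        t = np.sqrt(np.finfo(float).eps) * np.linalg.norm(A) / max(np.linalg.norm(E), 1e-16)
    return (f(A + t * E) - f(A)) / t

def cond_estimate(f, A, samples=10):
    """Crude lower bound on cond(f, A) = ||L_f(A)|| ||A|| / ||f(A)|| by sampling directions."""
    best = 0.0
    for _ in range(samples):
        E = np.random.randn(*A.shape)
        E /= np.linalg.norm(E)
        best = max(best, np.linalg.norm(frechet_fd(f, A, E)))
    return best * np.linalg.norm(A) / np.linalg.norm(f(A))

A = np.random.randn(5, 5)
print(cond_estimate(expm, A))
```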

130 citations


Journal ArticleDOI
TL;DR: An iterative procedure is proposed for computing the eigenvalues and eigenvectors of Hermitian Toeplitz matrices; the computational cost per eigenvalue-eigenvector pair for a matrix of order n is $O(n^2 )$ in serial mode.
Abstract: An iterative procedure is proposed for computing the eigenvalues and eigenvectors of Hermitian Toeplitz matrices. The computational cost per eigenvalue-eigenvector for a matrix of order n is $O(n^2 )$ in serial mode. Results of numerical experiments on Kac–Murdock–Szego matrices and randomly generated real symmetric Toeplitz matrices of orders 100, 150, 300, 500, and 1,000 are included.
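
The paper's iterative Toeplitz eigensolver is not reproduced here, but the Kac–Murdock–Szego test matrices it reports on are easy to generate; a minimal sketch, with a dense $O(n^3)$ reference eigensolver used purely for comparison:

```python
import numpy as np

def kms(n, rho=0.5):
    """Kac-Murdock-Szego matrix: symmetric Toeplitz with entries rho**|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

K = kms(100, rho=0.5)
w = np.linalg.eigvalsh(K)      # dense reference eigenvalues, not the paper's O(n^2) method
print(w[:3], w[-3:])
```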

87 citations


Journal ArticleDOI
TL;DR: A detailed account of determinantal formulas is given in a graph-theoretic form involving paths and cycles in the digraph of the matrix; when the digraph has special local properties, these formulas compute the determinant more efficiently than working directly with the matrix representation.
Abstract: A detailed account of various determinantal formulas is presented in a graph-theoretic form involving paths and cycles in the digraph of the matrix. For cases in which the digraph has special local properties, for example, a cutpoint or a bridge, particular formulas are given that are more efficient for computing the determinant than simply using the matrix representation. Applications are also given to characteristic determinants, general minors, and cofactors.

83 citations


Journal ArticleDOI
TL;DR: It is shown that large element growth does not necessarily lead to a large backward error in the solution of a particular linear system, and the practical implications of this result are commented on.
Abstract: The growth factor plays an important role in the error analysis of Gaussian elimination. It is well known that when partial pivoting or complete pivoting is used the growth factor is usually small, but it can be large. The examples of large growth usually quoted involve contrived matrices that are unlikely to occur in practice. We present real and complex $n \times n$ matrices arising from practical applications that, for any pivoting strategy, yield growth factors bounded below by $n / 2$ and $n$, respectively. These matrices enable us to improve the known lower bounds on the largest possible growth factor in the case of complete pivoting. For partial pivoting, we classify the set of real matrices for which the growth factor is $2^{n - 1} $. Finally, we show that large element growth does not necessarily lead to a large backward error in the solution of a particular linear system, and we comment on the practical implications of this result.
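
A minimal sketch of measuring element growth under partial pivoting, checked on Wilkinson's classical example with growth $2^{n-1}$; this standard example is used only for illustration (it is not one of the application matrices described above), and the growth is measured from the final factor U alone, which suffices for this matrix.

```python
import numpy as np
from scipy.linalg import lu

def growth_factor(A):
    """Growth factor max_ij |u_ij| / max_ij |a_ij| for LU with partial pivoting."""
    _, _, U = lu(A)                               # A = P L U with partial pivoting
    return np.abs(U).max() / np.abs(A).max()

def wilkinson_example(n):
    """Classical matrix whose growth under partial pivoting is 2^(n-1)."""
    A = -np.tril(np.ones((n, n)), -1) + np.eye(n)
    A[:, -1] = 1.0
    return A

n = 20
print(growth_factor(wilkinson_example(n)), 2.0 ** (n - 1))
```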

71 citations


Journal ArticleDOI
TL;DR: In this article, an efficient algorithm is presented for computing the generalized cross-validation function for the general cross-validated regularization/smoothing problem, where the problem is solved exactly in a reproducing kernel Hilbert space.
Abstract: An efficient algorithm for computing the generalized cross-validation function for the general cross-validated regularization/smoothing problem is provided. This algorithm is appropriate for problems where no natural structure is available, and the regularization/smoothing problem is solved (exactly) in a reproducing kernel Hilbert space. It is particularly appropriate for certain multivariate smoothing problems with irregularly spaced data, and certain remote sensing problems, such as those that occur in meteorology, where the sensors are arranged irregularly. The algorithm is applied to the fitting of interaction spline models with irregularly spaced data and two smoothing parameters; favorable timing results are presented. The algorithm may be extended to the computation of certain generalized maximum likelihood (GML) functions. Application of the GML algorithm to a problem in numerical weather forecasting, and to a broad class of hypothesis testing problems, is noted.
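
For orientation, a minimal sketch of the generalized cross-validation function in the standard ridge/Tikhonov special case, evaluated via the SVD; this is not the paper's RKHS-based algorithm for unstructured problems, and the normalization convention below is one common choice.

```python
import numpy as np

def gcv(X, y, lam):
    """GCV function for ridge regression y ~ X b with penalty lam * ||b||^2:
    V(lam) = n ||(I - H(lam)) y||^2 / tr(I - H(lam))^2,  H(lam) = X (X^T X + lam I)^{-1} X^T."""
    n = len(y)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    shrink = lam / (s**2 + lam)                       # 1 - s^2/(s^2 + lam)
    rss = np.sum((shrink * Uty) ** 2) + np.sum(y**2) - np.sum(Uty**2)
    df = n - np.sum(s**2 / (s**2 + lam))              # tr(I - H(lam))
    return n * rss / df**2

X = np.random.randn(50, 5)
y = X @ np.ones(5) + 0.1 * np.random.randn(50)
lams = np.logspace(-6, 2, 9)
print(min(lams, key=lambda lam: gcv(X, y, lam)))      # smoothing parameter chosen by GCV
```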

67 citations


Journal ArticleDOI
TL;DR: Convergence results are obtained for a variety of weighting schemes; in particular, it is shown that preweighting is in some cases more desirable than the traditional postweighting.
Abstract: Parallel algorithms generated by multisplittings are considered. A parallel algorithm may be formed, first, by concurrently executing the iteration associated with each splitting, and second, by forming a weighted sum of these computations. However, it is not imperative that the weighting be done last. Convergence results are obtained for a variety of other weighting schemes. In particular, it is shown that preweighting is in some cases more desirable than the traditional postweighting. Furthermore, we indicate how one can use a symmetric weighting scheme to obtain a good multisplitting version of the SSOR preconditioner. These algorithms are illustrated by computations done on an Alliant $\text{FX} /8$ .
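
A minimal sketch of the basic post-weighted multisplitting iteration $x_{k+1} = \sum_l E_l M_l^{-1}(N_l x_k + b)$, written sequentially rather than in parallel; the two Gauss-Seidel-type splittings and the equal weights below are illustrative choices, and the preweighted and SSOR variants discussed above are not implemented.

```python
import numpy as np

def multisplitting(A, b, splittings, weights, iters=200):
    """Post-weighted multisplitting: x <- sum_l E_l M_l^{-1} (N_l x + b),
    where A = M_l - N_l for each l and the diagonal weight matrices E_l sum to I."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = sum(E @ np.linalg.solve(M, (M - A) @ x + b)
                for M, E in zip(splittings, weights))
    return x

n = 8
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # nonsingular M-matrix
b = np.ones(n)
M1, M2 = np.tril(A), np.triu(A)                        # forward/backward Gauss-Seidel splittings
E = 0.5 * np.eye(n)
x = multisplitting(A, b, [M1, M2], [E, E])
print(np.linalg.norm(A @ x - b))                       # residual should be near machine precision
```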

66 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the problems of spectral and inner-outer factorization via state-space methods when the matrix to be factored is real-rational and surjective on the extended imaginary axis.
Abstract: Spectral factorization and inner-outer factorization are basic techniques in treating many problems in electrical engineering. In this paper, the problems of doing spectral and inner-outer factorizations via state-space methods are studied when the matrix to be factored is real-rational and surjective on the extended imaginary axis. It is shown that our factorization problems can be reduced to solving a certain constrained Riccati equation, and that, by examining a certain invariant subspace of the associated Hamiltonian matrix, this equation has a unique solution. Finally, a state-space procedure to perform the factorization is proposed.

57 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that cyclic wavefront orderings can be obtained from the cyclic by rows ordering through convergence preserving combinatorial transformations, and the notions of weak equivalence and P-equivalence of cyclic Jacobi orderings were introduced.
Abstract: Convergence of the cyclic Jacobi method for diagonalizing a symmetric matrix has never been conclusively settled. Forsythe and Henrici [Trans. Amer. Math. Soc., 94(1960), pp. 1–23] proved convergence for a cyclic by rows ordering. Here orderings are investigated that can be obtained from the cyclic by rows ordering through convergence preserving combinatorial transformations. First the class of “cyclic wavefront” orderings is introduced and it is shown that the class consists of exactly those orderings that are “equivalent” to the cyclic by rows ordering. It is also shown that certain block Jacobi methods are cyclic wavefront orderings when viewed as cyclic Jacobi methods. While discussing convergence proofs for parallel implementations of Jacobi methods and block Jacobi methods, the notions of “weak equivalence” and “P-equivalence” of Jacobi orderings are developed. Next the class of “P-wavefront” orderings is introduced that includes all orderings related to the cyclic by rows ordering through any known ...
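
For reference, a minimal sketch of the cyclic-by-rows Jacobi method whose convergence (Forsythe and Henrici) anchors the equivalence arguments above; only the rotation kernel and the row-cyclic sweep are shown, not the wavefront or block orderings.

```python
import numpy as np

def jacobi_cyclic_by_rows(A, sweeps=10):
    """Cyclic-by-rows Jacobi method for a symmetric matrix: rotations applied in the
    fixed order (0,1), (0,2), ..., (0,n-1), (1,2), ..., (n-2,n-1) in each sweep."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # Jacobi rotation chosen to annihilate A[p, q]
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
    return np.sort(np.diag(A))

S = np.random.randn(6, 6)
S = S + S.T
print(np.allclose(jacobi_cyclic_by_rows(S), np.linalg.eigvalsh(S)))
```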

56 citations


Journal ArticleDOI
TL;DR: It is shown that any algorithm that is stable on the class of nonsingular symmetric matrices, or on the class of symmetric positive definite matrices, is also strongly stable on that class, i.e., the computed solution solves a nearby problem that is itself symmetric or symmetric positive definite.
Abstract: An algorithm for solving linear equations is stable on the class of nonsingular symmetric matrices or on the class of symmetric positive definite matrices if the computed solution solves a system that is near the original problem. Here it is shown that any stable algorithm is also strongly stable on the same matrix class if the computed solution solves a nearby problem that is also symmetric or symmetric positive definite.

48 citations


Journal ArticleDOI
TL;DR: In this paper, a new derivation of the Leverrier-Fadeev algorithm for simultaneous determination of the adjoint and determinant of the $n \times n$ characteristic matrix is given.
Abstract: A new derivation is given of the Leverrier–Fadeev algorithm for simultaneous determination of the adjoint and determinant of the $n \times n$ characteristic matrix $\lambda I_n - A$. The proof uses an appropriate companion matrix and is of some interest in its own right. The method is extended to produce a corresponding scheme for the inverse of the polynomial matrix $\lambda ^2 I_n - \lambda A_1 - A_2 $, and indeed can be generalized for a regular polynomial matrix of arbitrary degree. The results have application to linear control systems theory.
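
A minimal sketch of the classical Leverrier–Fadeev recursion for the characteristic polynomial and the adjugate of $\lambda I_n - A$; the companion-matrix derivation and the extension to polynomial matrices of higher degree are not reproduced, and the variable names are illustrative.

```python
import numpy as np

def leverrier_fadeev(A):
    """Returns (c, M): c = [1, c_{n-1}, ..., c_0] with
    det(lam*I - A) = lam^n + c_{n-1} lam^{n-1} + ... + c_0, and
    M = [M_1, ..., M_n] with adj(lam*I - A) = sum_k M_k lam^{n-k}."""
    n = A.shape[0]
    c = [1.0]
    M = [np.eye(n)]
    for k in range(1, n + 1):
        c.append(-np.trace(A @ M[-1]) / k)
        if k < n:
            M.append(A @ M[-1] + c[-1] * np.eye(n))
    return c, M

A = np.random.randn(4, 4)
c, M = leverrier_fadeev(A)
print(np.allclose(np.poly(A), c))                     # characteristic polynomial coefficients
lam = 2.7
adj = sum(Mk * lam ** (4 - k) for k, Mk in enumerate(M, start=1))
print(np.allclose(adj @ (lam * np.eye(4) - A), np.polyval(c, lam) * np.eye(4)))
```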

Journal ArticleDOI
TL;DR: An efficient and parallelizable decomposition method is presented, referred to as the SAS domain decomposition Method, for orthotropic elasticity problems with symmetrical domain and boundary conditions, which takes advantage of the symmetry of a given problem and decomposes the whole domain of the original problem into independent subdomains.
Abstract: The construction of an efficient numerical scheme for three-dimensional elasticity problems depends not only on understanding the nature of the physical problem involved, but also on exploiting special properties associated with its discretized system and incorporating these properties into the numerical algorithm. In this paper an efficient and parallelizable decomposition method is presented, referred to as the SAS domain decomposition method, for orthotropic elasticity problems with symmetrical domain and boundary conditions. Mathematically, this approach exploits important properties possessed by the special class of matrices A that satisfy the relation $A=PAP$, where P is some symmetrical signed permutation matrix. These matrices can be decomposed, via orthogonal transformations, into disjoint submatrices. Physically, the method takes advantage of the symmetry of a given problem and decomposes the whole domain of the original problem into independent subdomains. This method has potential for reducing...
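
A small sketch of the algebraic fact the SAS approach exploits: if $A = PAP$ with P a symmetric signed permutation (so $P^2 = I$), then A commutes with P, and an orthonormal eigenbasis of P block-diagonalizes A into independent subproblems; the centrosymmetric example below is an illustration, not an elasticity discretization.

```python
import numpy as np

n = 6
P = np.fliplr(np.eye(n))                  # exchange matrix: symmetric, P @ P = I

# build a matrix satisfying A = P A P (a centrosymmetric matrix, for illustration)
B = np.random.randn(n, n)
A = B + P @ B @ P
assert np.allclose(A, P @ A @ P)

# orthonormal eigenvectors of P; eigh lists the -1 eigenvectors first (ascending order)
w, Q = np.linalg.eigh(P)
m = np.sum(w < 0)                         # dimension of the -1 eigenspace

# in this basis A splits into two independent diagonal blocks
T = Q.T @ A @ Q
print(np.allclose(T[:m, m:], 0), np.allclose(T[m:, :m], 0))
```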

Journal ArticleDOI
TL;DR: In this article, the extreme points of the closed cone of all positive semidefinite Hermitian matrices whose entry is zero whenever $i e j$ and $(i,j)$ is not in P are considered.
Abstract: Let P be a symmetric set of ordered pairs of integers from 1 to n, and define $M^ + (P)$ to be the closed cone of all positive semidefinite Hermitian matrices whose $(i,j)$ entry is zero whenever $i \neq j$ and $(i,j)$ is not in P. The extreme points of $M^ + (P)$ are considered. In some special cases, the maximum rank that such an extreme point can have is calculated.

Journal ArticleDOI
TL;DR: In this article, the Kronecker product and the matrix modulus are generalized to block-partitioned matrices as the block Kronecker product and the block norm matrix, and their utility is illustrated by deriving, in simplified fashion, a recent result in robustness analysis.
Abstract: Complex and large-scale systems are often viewed as collections of interacting subsystems. Properties of the overall system are then deduced from the properties of the individual subsystems and their interconnections. This analysis process for large-scale systems usually requires manipulating the matrix subblocks of block-partitioned matrices. Two tools that are useful in linear systems analysis are the Kronecker product and the matrix modulus $(| a_{ij} |)$. However, these tools are designed for matrices partitioned into their scalar elements. Thus, this paper defines and presents properties of the block Kronecker product and block norm matrix, generalizations of the Kronecker product and matrix modulus to block-partitioned matrices. The utility of the results is illustrated by deriving in simplified fashion a recent result in robustness analysis.

Journal ArticleDOI
TL;DR: In this paper, the problem of maximizing the minimum eigenvalue of a weighted sum of symmetric matrices when the Euclidean norm of the vector of weights is constrained to be unity is considered.
Abstract: The problem considered is that of maximizing, with respect to the weights, the minimum eigenvalue of a weighted sum of symmetric matrices when the Euclidean norm of the vector of weights is constrained to be unity. A procedure is given for determining the sign of the maximum of the minimum eigenvalue and for approximating the optimal weights arbitrarily accurately when that sign is positive or zero. Linear algebra, a conical hull representation of the set of $n \times n$ symmetric positive semidefinite matrices and convex programming are employed.

Journal ArticleDOI
TL;DR: In this article, a self-contained approach based on the Drazin generalized inverse is used to derive many basic results in discrete time, finite state Markov decision processes, including the average reward evaluation equations, Laurent series expansions, as well as the finite test for Blackwell optimality.
Abstract: A new self-contained approach based on the Drazin generalized inverse is used to derive many basic results in discrete time, finite state Markov decision processes. A product form representation for the transition matrix of a stationary policy gives new derivations of the average reward evaluation equations, Laurent series expansions, as well as the finite test for Blackwell optimality. This representation also suggests new computational methods.

Journal ArticleDOI
TL;DR: In this article, a parallel multisplitting iteration scheme for solving a non-singular linear system with a nonnegative inverse was proposed and it was shown that the iteration converges to the solution from any initial vector.
Abstract: O’Leary and White have suggested a parallel multisplitting iteration scheme for solving a non-singular linear system $Ax = b$. Among other things they have shown that when A has a nonnegative inverse and the multisplitting is weak regular, then the iteration converges to the solution from any initial vector. The extension of this result to the case where A is a singular M-matrix is discussed. Problems of solvability, consistency, and convergence arise and their resolution is considered.

Journal ArticleDOI
TL;DR: It is shown, by performing a backward error analysis, that one of the pole assignment algorithms, due to Petkov, Christov, and Konstantinov, is numerically stable.
Abstract: Of the six or so pole assignment algorithms currently available, several have been claimed to be numerically stable, but no proofs have been published to date. It is shown, by performing a backward error analysis, that one of these algorithms, due to Petkov, Christov, and Konstantinov [IEEE Trans. Automat. Control, AC–29 (1984), pp. 1045–1048] is numerically stable.

Journal ArticleDOI
TL;DR: In this paper, the directional derivatives of multiple eigenvalues of a symmetric eigenproblem analytically dependent on several parameters are given, and the result can be used to define the sensitivity of multiple Eigenvalues, and it is useful for investigating structural vibration design and control system design.
Abstract: This note is a continuation of the work in [J. Comput. Math., 6 (1988), pp. 28–38]. The directional derivatives of multiple eigenvalues of a symmetric eigenproblem analytically dependent on several parameters are given. The result can be used to define the sensitivity of multiple eigenvalues, and it is useful for investigating structural vibration design and control system design.
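
A sketch of the first-order fact underlying such sensitivities in the one-parameter case: if $\lambda_0$ is an eigenvalue of multiplicity r of the symmetric matrix $A(t)$ at $t = 0$, with orthonormal eigenvector basis X, then the derivatives of the r eigenvalue branches are the eigenvalues of $X^T A'(0) X$; the example below is a finite-difference illustration, not the paper's several-parameter treatment.

```python
import numpy as np

# A(t) = A0 + t*A1, where A0 has the eigenvalue 0 with multiplicity 2
rng = np.random.default_rng(0)
A1 = rng.standard_normal((4, 4))
A1 = A1 + A1.T
A0 = np.diag([0.0, 0.0, 3.0, 5.0])

X = np.eye(4)[:, :2]                      # orthonormal basis of the multiple eigenspace

# predicted directional derivatives: eigenvalues of X^T A'(0) X
pred = np.sort(np.linalg.eigvalsh(X.T @ A1 @ X))

# finite-difference check on the two eigenvalue branches emanating from 0
t = 1e-6
fd = np.sort(np.linalg.eigvalsh(A0 + t * A1))[:2] / t
print(pred, fd)                           # should agree to roughly 1e-5
```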

Journal ArticleDOI
TL;DR: In this article, the authors apply interpolation results for rational matrix functions with incomplete data to solve a problem of shifting a part of the poles and the zeros of a given rational matrix function, keeping the other poles and zeros unchanged.
Abstract: Recent interpolation results for rational matrix functions with incomplete data are applied to solve a problem of shifting a part of the poles and the zeros of a given rational matrix function, keeping the other poles and zeros unchanged and the McMillan degree as small as possible.

Journal ArticleDOI
TL;DR: In this paper, Rayleigh quotient iteration is used for finding partial eigensolutions of symmetric tridiagonal matrices with results that compare favorably with the EISPACK routine TSTURM.
Abstract: Rayleigh quotient iteration can often yield an eigenvalue-eigenvector pair of a positive-definite Hermitian problem in a very short time. The primary hindrance associated with its use as a regular computational tool lies with the difficulty of identifying and selecting the final regions of convergence. In this paper rigorous, accessible criteria for localizing Rayleigh quotient iteration to prespecified intervals of the spectrum are provided, as well as extensions to situations where only partial spectral information is available. An application for finding partial eigensolutions of symmetric tridiagonal matrices is given with results that compare very favorably with the EISPACK routine TSTURM.
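
A minimal sketch of plain Rayleigh quotient iteration for a symmetric matrix; the localization criteria and interval-targeting machinery described above are not included, so which eigenpair is found depends on the starting vector.

```python
import numpy as np

def rqi(A, x0, tol=1e-12, maxit=50):
    """Rayleigh quotient iteration for a symmetric matrix A."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        rho = x @ A @ x                                  # Rayleigh quotient
        try:
            y = np.linalg.solve(A - rho * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            return rho, x                                # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - (x @ A @ x) * x) < tol:
            break
    return x @ A @ x, x

# symmetric tridiagonal test matrix
n = 50
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = rqi(T, np.random.randn(n))
print(np.linalg.norm(T @ v - lam * v))
```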

Journal ArticleDOI
TL;DR: In this paper, the aggregation of input-output models is analyzed and three axioms are shown to characterize a simple functional form for aggregation; then the properties of the aggregated model are analyzed relative to the original model.
Abstract: The aggregation of input-output models is analyzed. Three axioms are shown to characterize a simple functional form for aggregation; then the properties of the aggregated model are analyzed relative to the original model. Since an input-output model is driven by a square substochastic matrix, these results can also be viewed as facts about abstract mappings involving substochastic matrices.

Journal ArticleDOI
TL;DR: In this article, a weak majorization inequality is established for the singular values of a sum of products of complex matrices; the result unifies several known majorization statements for singular values.
Abstract: Let $B,D_j ,E_j ,j = 1,2, \cdots ,k,$ be $n \times n$ complex matrices. It is shown that \[ \sigma \left( \sum_{j = 1}^k D_j BE_j^ * \right) \prec_w \sigma (B)\bullet\delta \] where $\delta $ is any vector with components $\delta _1 \geqq \cdots \geqq \delta _n $ that weakly majorizes both the following vectors: \[ \sigma \left( \sum D_j D_j^ * \right)^{1/2}\bullet \sigma \left( \sum E_j E_j^ * \right)^{1/2} \qquad \text{and}\qquad \sigma \left( \sum D_j^ * D_j \right)^{1/2}\bullet \sigma \left( \sum E_j^ * E_j \right)^{1/2} .\] Here $\sigma ( \cdot )$ denotes the vector of singular values arranged in nonincreasing order, $ \prec _w $ denotes weak majorization, and $\bullet$ indicates Schur (entrywise) multiplication. The result unifies several known results concerning majorization statements for singular values.
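
A small numerical check of the displayed inequality on random complex matrices, taking for $\delta$ the entrywise maximum of the two displayed vectors (which weakly majorizes both); this verifies instances only and is, of course, not a proof.

```python
import numpy as np

def sv(M):
    """Singular values in nonincreasing order."""
    return np.linalg.svd(M, compute_uv=False)

def weakly_majorizes(y, x):
    """True if x is weakly majorized by y (partial sums of the sorted vectors)."""
    return np.all(np.cumsum(np.sort(x)[::-1]) <= np.cumsum(np.sort(y)[::-1]) + 1e-10)

rng = np.random.default_rng(1)
n, k = 5, 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(k)]
E = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(k)]

lhs = sv(sum(Dj @ B @ Ej.conj().T for Dj, Ej in zip(D, E)))
v1 = np.sqrt(sv(sum(Dj @ Dj.conj().T for Dj in D))) * np.sqrt(sv(sum(Ej @ Ej.conj().T for Ej in E)))
v2 = np.sqrt(sv(sum(Dj.conj().T @ Dj for Dj in D))) * np.sqrt(sv(sum(Ej.conj().T @ Ej for Ej in E)))
delta = np.maximum(v1, v2)                 # weakly majorizes both v1 and v2

print(weakly_majorizes(sv(B) * delta, lhs))
```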

Journal ArticleDOI
TL;DR: In this article, a family of flows which are continuous analogues of the constant and variable shift $QR$ algorithms for the singular value decomposition problem is presented, and it is shown that certain of these flows interpolate the algorithm exactly.
Abstract: A family of flows which are continuous analogues of the constant and variable shift $QR$ algorithms for the singular value decomposition problem is presented, and it is shown that certain of these flows interpolate the $QR$ algorithm exactly. Here attention is not restricted to bidiagonal matrices; arbitrary rectangular matrices are considered.

Journal ArticleDOI
TL;DR: For an n-by-n matrix with a unique unit LU factorization, combinatorial circumstances are determined under which entries of U coincide with the corresponding entries of A; the relationship of the results to Gaussian elimination and sparse matrix analysis is discussed.
Abstract: For an n-by-n matrix $A = [ a_{ij} ]$ which has a unique unit $LU$ factorization $A = LU$ with $U = [ u_{ij} ]$, combinatorial circumstances are determined under which $u_{ij} = a_{ij} $ for a given pair $i\leqq j$ or for all $i < j$ (or all $i\leqq j$). Analogous results are stated for other triangular factorizations and for the $LU$ factorization of a principal submatrix of A. The relationship of the results to Gaussian elimination and sparse matrix analysis is discussed.

Journal ArticleDOI
TL;DR: It is known that a vector satisfying a certain exponentially large set of inequalities related to a matrix M allows linear complementarity problems with matrix M to be solved in n steps; it is shown here that the set of such vectors, if nonempty, is the interior of a simplicial cone.
Abstract: It is known that a vector satisfying a certain exponentially large set of inequalities related to a matrix M will allow the solution of linear complementarity problems with matrix M in n steps. It is shown that the set of such vectors, if nonempty, is the interior of a simplicial cone. The defining inequalities for this cone show that $M^T $ is hidden Minkowski if such a vector exists.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a Hermitian matrix can be reduced to Hermitians canonical form by a complex orthogonal congruence, and a short proof was given showing that a nonsingular symmetric matrix and a hermitians matrix can simultaneously be simultaneously reduced to the identity matrix and Hermit's canonical form.
Abstract: It is shown that a Hermitian matrix can be reduced to a Hermitian canonical form by a complex orthogonal congruence As a consequence, a short proof is given showing that a nonsingular symmetric matrix and a Hermitian matrix can be simultaneously reduced to the identity matrix and a Hermitian canonical form by complex orthogonal T- and $ * $-congruences, respectively

Journal ArticleDOI
TL;DR: This paper shows how each of the rank 1 methods gives rise to a single-application rank 2 method, and involves a new Householder transformation technique designed to eliminate elements of two vectors at once using a rank 1 correction of the identity matrix.
Abstract: Gill, Golub, Murray, and Saunders have described five methods by which the Cholesky factors of a positive definite matrix may be updated when the matrix is subjected to a symmetric rank 1 modification. For a negative rank 1 update, a modification of one of their methods was given by Lawson and Hanson and analyzed by Bojanczyk, Brent, van Dooren, and de Hoog. In many minimization algorithms, symmetric rank 2 modifications are found.This paper shows how each of the rank 1 methods gives rise to a single-application rank 2 method. For some of the methods, this involves a new Householder transformation technique designed to eliminate elements of two vectors at once using a rank 1 correction of the identity matrix.The authors’ experiments on scalar, vector, and shared-memory multiple-instructions multiple-data machines show that it is more economical to perform rank 2 updates rather than two rank 1 updates. In their comparison, the authors do not consider pipelining two applications of the rank 1 algorithms, wh...
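
For context, a minimal sketch of a standard rank 1 Cholesky update applied twice to realize a (positive semidefinite) rank 2 modification, i.e., the two-pass approach that the paper's single-application rank 2 methods are designed to beat; this is a textbook scheme, not necessarily any of the five methods referenced above.

```python
import numpy as np

def chol_update(L, v):
    """Return the lower Cholesky factor of A + v v^T, given that of A."""
    L, v = L.copy(), v.copy()
    n = len(v)
    for k in range(n):
        r = np.hypot(L[k, k], v[k])
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
            v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L

# a rank 2 symmetric modification applied as two successive rank 1 updates
n = 6
A = np.random.randn(n, n); A = A @ A.T + n * np.eye(n)
u, w = np.random.randn(n), np.random.randn(n)
L = np.linalg.cholesky(A)
L2 = chol_update(chol_update(L, u), w)
print(np.allclose(L2 @ L2.T, A + np.outer(u, u) + np.outer(w, w)))
```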

Journal ArticleDOI
TL;DR: In this article, two algorithms for the reconstruction of symmetric tridiagonal (not necessarily persymmetric) matrices J with subdiagonal entries equal to one from their eigenvalues are established.
Abstract: Two algorithms for the reconstruction of symmetric tridiagonal (not necessarily persymmetric) matrices J with subdiagonal entries equal to one from their eigenvalues are established. The first algorithm is an iteration method using orthogonal similarity transformations in the sense of an inverted Jacobi algorithm and is shown to be locally convergent. Since reconstruction problems are often rather ill-conditioned, the algorithm may be slow, but it gives good approximations $J'$ to J. $J'$ may be used as a starting value for the second algorithm, a Newton method iterating the characteristic polynomial of $J'$. Numerical examples demonstrate the convergence behavior, also for nonpersymmetric matrices J.

Journal ArticleDOI
TL;DR: In this article, a Rayleigh-Ritz refinement technique was proposed for accelerating the convergence of iterative procedures for computing the stationary distribution of a nearly uncoupled stochastic matrix.
Abstract: A Rayleigh–Ritz refinement technique is analyzed that is suitable for accelerating the convergence of iterative procedures for computing the stationary distribution of a nearly uncoupled stochastic matrix. In particular, for that case the error of the new approximation in terms of the previous error and the degree of coupling gets a special form. Cases where the refinement is promising are given as well. All the analysis requires the single assumption that the Markov chain under consideration is irreducible.