
Showing papers on "QR decomposition published in 1987"


Book
01 Jan 1987
TL;DR: Covers general matrices, band matrices, positive definite matrices, positive definite band matrices, symmetric indefinite matrices, triangular matrices, and tridiagonal matrices, treating the Cholesky decomposition and the QR decomposition up to and including the singular value decomposition.
Abstract: General matrices, band matrices, positive definite matrices, positive definite band matrices, symmetric indefinite matrices, triangular matrices, tridiagonal matrices, the Cholesky decomposition, the QR decomposition, updating QR and Cholesky decompositions, the singular value decomposition, references, basic linear algebra subprograms, timing data, program listings, BLAS listings.

1,050 citations


Journal ArticleDOI
TL;DR: In this article, a new way to represent products of Householder matrices is given that makes a typical Householder matrix algorithm rich in matrix-matrix multiplication, which is very desirable in that matrix-matrix...
Abstract: A new way to represent products of Householder matrices is given that makes a typical Householder matrix algorithm rich in matrix-matrix multiplication. This is very desirable in that matrix-matrix...

257 citations
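The blocking idea behind this representation can be sketched in a few lines: unit Householder reflectors H_i = I - 2 v_i v_iᵀ are accumulated into an aggregated form Q = I + W Yᵀ, so that applying Q to a block of vectors becomes two matrix-matrix products. A minimal NumPy illustration of the WY idea (our own sketch, not the paper's exact formulation; all names are ours):

```python
import numpy as np

def wy_accumulate(vs):
    """Fold unit reflectors H_i = I - 2 v_i v_i^T into Q = I + W Y^T.
    Applying Q to a block B then costs two matrix-matrix products:
    Q @ B = B + W @ (Y.T @ B)."""
    n = vs[0].shape[0]
    W = np.zeros((n, 0))
    Y = np.zeros((n, 0))
    for v in vs:
        v = v / np.linalg.norm(v)
        Qv = v + W @ (Y.T @ v)               # current Q applied to v
        W = np.column_stack([W, -2.0 * Qv])  # Q H = I + [W, -2Qv][Y, v]^T
        Y = np.column_stack([Y, v])
    return W, Y

rng = np.random.default_rng(0)
n, k = 6, 3
vs = [rng.standard_normal(n) for _ in range(k)]
W, Y = wy_accumulate(vs)
Q = np.eye(n) + W @ Y.T

Q_ref = np.eye(n)                            # explicit product, for checking
for v in vs:
    v = v / np.linalg.norm(v)
    Q_ref = Q_ref @ (np.eye(n) - 2.0 * np.outer(v, v))
assert np.allclose(Q, Q_ref)
```

The update rule follows from Q H = Q - 2 (Qv) vᵀ, which appends one column to each of W and Y per reflector.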


Journal ArticleDOI
TL;DR: A wide variety of techniques for estimating the condition number of a triangular matrix are surveyed, and recommendations concerning the use of the estimates in applications are made.
Abstract: We survey and compare a wide variety of techniques for estimating the condition number of a triangular matrix, and make recommendations concerning the use of the estimates in applications. Each of the methods is shown to bound the condition number; the bounds can broadly be categorised as upper bounds from matrix theory and lower bounds from heuristic or probabilistic algorithms. For each bound we examine by how much, at worst, it can overestimate or underestimate the condition number. Numerical experiments are presented in order to illustrate and compare the practical performance of the condition estimators.

175 citations
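One of the cheapest classes of lower bounds in this literature follows from the inequality ||R⁻¹||∞ ≥ ||z||∞ / ||b||∞ whenever R z = b. A hedged sketch using random ±1 right-hand sides (our own illustration, not any of the specific estimators compared in the paper):

```python
import numpy as np
from scipy.linalg import solve_triangular

def cond_lower_bound(R, trials=5, seed=0):
    """Cheap lower bound on the infinity-norm condition number of upper
    triangular R: for any b != 0, ||R^{-1}||_inf >= ||z||_inf / ||b||_inf
    where R z = b.  Try a few random +/-1 right-hand sides, keep the best."""
    rng = np.random.default_rng(seed)
    n = R.shape[0]
    best = 0.0
    for _ in range(trials):
        b = rng.choice([-1.0, 1.0], size=n)
        z = solve_triangular(R, b)               # one O(n^2) triangular solve
        best = max(best, np.linalg.norm(z, np.inf))  # ||b||_inf = 1
    return np.linalg.norm(R, np.inf) * best

rng = np.random.default_rng(1)
R = np.triu(rng.standard_normal((8, 8))) + 5.0 * np.eye(8)
est = cond_lower_bound(R)
exact = np.linalg.cond(R, np.inf)
assert 0.0 < est <= exact * (1.0 + 1e-8)         # always a lower bound
```

Each trial costs only one triangular solve, versus forming R⁻¹ explicitly for the exact condition number.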


Journal ArticleDOI
TL;DR: An efficient algorithm is presented for selecting test points for use in applications such as calibration and fault diagnosis of electronic networks and a definition of testability based on the concept of minimum estimation error is introduced.
Abstract: An efficient algorithm is presented for selecting test points for use in applications such as calibration and fault diagnosis of electronic networks. The algorithm, based on QR factorization of the circuit sensitivity matrix, minimizes the prediction or estimation errors which result from random measurement error. A definition of testability based on the concept of minimum estimation error is also introduced. Practical examples are given.

103 citations


Journal ArticleDOI
TL;DR: It is shown that by adding a correction step using only single precision the authors get a method which under mild conditions is as accurate as the QR method.

63 citations


Journal ArticleDOI
TL;DR: An improvement of the Jacobi singular value decomposition algorithm is proposed in this article, where the matrix is first reduced to a triangular form and the row-cyclic strategy preserves the triangularity.
Abstract: An improvement of the Jacobi singular value decomposition algorithm is proposed. The matrix is first reduced to a triangular form. It is shown that the row-cyclic strategy preserves the triangularity. Further improvements lie in the convergence properties. It is shown that the method converges globally, and a proof of the quadratic convergence is indicated as well. The numerical experiments confirm these theoretical predictions. Our method is about 2-3 times slower than the standard QR method, but it almost reaches the latter if the matrix is diagonally dominant or of low rank.

44 citations


Journal ArticleDOI
TL;DR: In this article, a way of applying Householder reflections is described which is competitive with or superior to the use of Givens rotations for sparse orthogonal decomposition.

32 citations


Proceedings ArticleDOI
06 Apr 1987
TL;DR: A more computationally efficient solution to the QR RLS problem that requires only O(N) computations per time update, when the input has the usual shift-invariant property, and the computation and implementation requirements are reduced by an order of magnitude.
Abstract: There has been considerable recent interest in QR factorization for recursive solution to the least-squares adaptive-filtering problem, mainly because of the good numerical properties of QR factorizations. Early work by Gentleman and Kung (1981) and McWhirter (1983) has produced triangular systolic arrays of N²/2 processors that solve the Recursive Least Squares (RLS) adaptive-filtering problem (where N is the size of the adaptive filter). Here, we introduce a more computationally efficient solution to the QR RLS problem that requires only O(N) computations per time update, when the input has the usual shift-invariant property. Thus, computation and implementation requirements are reduced by an order of magnitude. The new algorithms are based on a structure that is neither a transversal filter nor a lattice, but can be best characterized by a functionally equivalent set of parameters that represent the time-varying "least-squares frequency transforms" of the input sequences. Numerical stability can be ensured by implementing computations as 2 × 2 orthogonal (Givens) rotations.

28 citations
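The numerical core of QR-RLS is folding each new data row into the triangular factor with a sequence of Givens rotations. A hedged sketch of that standard O(N²) triangular update (not the O(N) fast algorithm introduced in the paper, whose structure is more involved):

```python
import numpy as np

def givens(a, b):
    """Return c, s with  c*a + s*b = r >= 0  and  -s*a + c*b = 0."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def qr_rls_update(R, x, lam=1.0):
    """Fold one data row x into the triangular factor R of the
    (exponentially weighted) Gram matrix using N Givens rotations,
    the O(N^2) core of QR-RLS.  Sketch only."""
    R = np.sqrt(lam) * R.copy()
    x = np.asarray(x, dtype=float).copy()
    for i in range(len(x)):
        c, s = givens(R[i, i], x[i])
        Ri, xi = R[i, i:].copy(), x[i:].copy()
        R[i, i:] = c * Ri + s * xi     # rotate row i of R against x
        x[i:] = -s * Ri + c * xi       # this zeroes x[i]
    return R

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))
R = np.zeros((4, 4))
for row in X:                          # stream the rows in one at a time
    R = qr_rls_update(R, row)
# With lam = 1 (no forgetting), R is the Cholesky factor of X^T X.
assert np.allclose(R.T @ R, X.T @ X)
```

Each rotation is orthogonal, so the Gram matrix of the stacked rows is preserved while the incoming row is annihilated.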


Proceedings ArticleDOI
01 Jan 1987
TL;DR: Five new orderings for the Jacobi SVD algorithm are studied, and it is established that these orderings are also equivalent to the classical cyclic-by-rows ordering and thus share the same convergence properties.
Abstract: The singular value decomposition (SVD) of a matrix has important applications in data analysis and signal processing. Jacobi-type procedures are gaining wide acceptance for computing the SVD on parallel machines due to their inherent parallelism. But, since 1960, the only ordering that guarantees convergence for a Jacobi SVD has been the cyclic-by-rows (columns) ordering. It was even conjectured that loss of theoretical convergence is a necessary price to pay for parallel computing. We focus on various Jacobi SVD algorithms that have been proposed in the literature, particularly on orderings for cyclic annihilation of elements of an n × n matrix A. We give a systematic description of the six well-known orderings for parallel Jacobi implementations, and show that these orderings are equivalent to each other. We introduce a cyclic-by-modulus ordering and show that it is closely related to Sameh's parallel implementation of the row ordering. Then we consider two Jacobi SVD methods proposed for parallel computing by Brent et al. (with round-robin ordering) and Luk (with odd-even ordering). By showing their equivalence to the cyclic-by-modulus ordering and using the fact that equivalent orderings share the same convergence properties, we prove convergence of the method by Brent et al. for odd n and of the method by Luk for any n. We also present an example to show that the method of Brent et al. does not converge in general for even n. Triangular architectures are important for many applications, where cost and efficiency are primary considerations. We give a convergence proof for matrices with triangular structure, where a preparatory QR decomposition is assumed for the general case. Also of concern for applications is the computational cost associated with doing certain operations. We present relaxed rotation-angle approximations which reduce computational costs by eliminating a square-root operation.

19 citations


Proceedings ArticleDOI
01 Apr 1987
TL;DR: The technique, which has previously been applied to adaptive lattice filters, is shown to be applicable to the matrix triangularization related problems such as solving general linear systems and computing eigenvalues by the QR algorithm.
Abstract: A technique is described which allows least squares computation to be made at arbitrarily high sampling rates, overcoming the inherent speed limitation due to the recursive algorithms. Previous efforts at high sampling rate systolic implementations of least squares problems have used Givens transformations and QR decomposition, achieving a sampling rate limited by the time required by several multiplication operations. Taking advantage of the linearity of the least squares recursion, the algorithms can be recast into a new realization for which the bound on throughput of least squares computation is arbitrarily high. The technique, which has previously been applied to adaptive lattice filters, is shown to be applicable to the matrix triangularization related problems such as solving general linear systems and computing eigenvalues by the QR algorithm.

14 citations


Journal ArticleDOI
TL;DR: Two particular implementations of the symmetric algorithm outperform both the modified Gram-Schmidt and the Hestenes-Stiefel algorithm and in most cases are superior in terms of accuracy to the QR algorithm.
Abstract: A large number of implementations of the symmetric algorithm in the ABS class for linear systems are compared on a set of ill-conditioned test problems, together with implementations of the modified Gram-Schmidt, the Hestenes-Stiefel, and the QR algorithms. The results indicate the superiority of two particular implementations of the symmetric algorithm, which outperform both the modified Gram-Schmidt and the Hestenes-Stiefel algorithm and in most cases are superior in terms of accuracy to the QR algorithm.

Journal ArticleDOI
TL;DR: D. Sweet's clever QR decomposition algorithm for Toeplitz matrices is considered, and it requires only O(n²) flops to factor an n × n matrix.

Journal ArticleDOI
Linda Kaufman1
TL;DR: In this article, the generalized Householder transformation, a rank-k modification of the identity matrix designed to annihilate elements in k vectors simultaneously, is discussed in the context of sparse problems.

Journal ArticleDOI
TL;DR: In this paper, the singular value decomposition (SVD) of an infinite block-Hankel matrix can be reduced to the SVD of a matrix with finite dimension using the coefficients of the transfer function matrix (TFM).
Abstract: This paper presents a first attempt to perform the singular value decomposition of an infinite block-Hankel matrix. It is shown that the singular value decomposition (SVD) of an infinite block-Hankel matrix can be reduced to the SVD of a matrix with finite dimension. The formulation is solely in terms of the coefficients of the transfer function matrix (TFM). Most importantly, this method leads to a new formulation of L∞-optimization problems and Hankel-norm model reduction problems, and a new algorithm for minimal balanced realization of multivariable systems.

ReportDOI
01 Sep 1987
TL;DR: By solving the sequence of problems, this paper is able to QR factor data matrices of the type usually associated with correlation, pre- and post-windowed, and covariance methods of linear prediction, and all three algorithms generate generalized reflection coefficients that may be used for filtering or classification.
Abstract: This paper poses a sequence of linear prediction problems that are a little different from those previously posed. By solving the sequence of problems we are able to QR factor data matrices of the type usually associated with correlation, pre- and post-windowed, and covariance methods of linear prediction. Our solutions cover the forward, backward, and forward-backward problems. The QR factor orthogonalizes the data matrix and solves the problem of Cholesky factoring the experimental correlation matrix and its inverse. This means we may use generalized Levinson algorithms to derive generalized QR algorithms, which are then used to derive generalized Schur algorithms. All three algorithms are true lattice algorithms that may be implemented either on a vector machine or on a multi-tier lattice, and all three algorithms generate generalized reflection coefficients that may be used for filtering or classification. Keywords: Toeplitz matrices; Factorization; Covariance; Matrix theory.

01 Sep 1987
TL;DR: It is demonstrated that this basic scheme might be extended in a numerically efficient way to combine the advantages of existing numerical procedures with those of more classical statistical procedures, such as stepwise regression.
Abstract: A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme might be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
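The mechanics of subset selection via a column-pivoted QR can be sketched with the standard norm-based pivoting available in SciPy; the residual-based MRQR pivoting strategy described above differs, so this is only an illustration of the general approach:

```python
import numpy as np
from scipy.linalg import qr, lstsq

rng = np.random.default_rng(0)
n, p, k = 50, 6, 3
A = rng.standard_normal((n, p))
A[:, 3] = A[:, 0] + 1e-8 * rng.standard_normal(n)   # near-duplicate column
y = A @ rng.standard_normal(p) + 0.01 * rng.standard_normal(n)

# Column-pivoted QR orders columns by largest remaining residual norm,
# so the first k pivots form a well-conditioned candidate subset.
Q, R, piv = qr(A, mode='economic', pivoting=True)
subset = piv[:k]
coef, *_ = lstsq(A[:, subset], y)       # refit on the chosen subset only
```

Because columns 0 and 3 are nearly identical, at most one of them survives into the selected subset, which is exactly the collinearity screening one wants from pivoting.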

Proceedings Article
01 Dec 1987
TL;DR: This scheme performs exceptionally well for rank deficient matrices as well as for those rectangular matrices having clustered or multiple singular values, and may be well suited for applications such as real-time signal processing.
Abstract: We present a multiprocessor scheme for determining the singular value decomposition of rectangular matrices in which the number of rows is substantially larger (or smaller) than the number of columns. In this scheme, we perform an initial QR factorization on the tall matrix (either A or A^T) using a multiprocessor block Householder algorithm. We then use a parallel one-sided Jacobi scheme to orthogonalize the columns of the upper triangular matrix R to yield the factorization RV = UΣ, from which the desired singular value decomposition is obtained. Preliminary experiments on an Alliant FX/8 computer system with 8 processors indicate speedups near 5 for our scheme over an optimized implementation of the Linpack/Eispack routines which perform the classical bidiagonalization technique. Our scheme performs exceptionally well for rank-deficient matrices as well as for those rectangular matrices having clustered or multiple singular values, and may be well suited for applications such as real-time signal processing. We present performance results on the Alliant FX/8 and Cray X-MP/48 computer systems with particular emphasis on speedups obtained for our scheme over classical SVD algorithms.
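The two-stage structure (QR on the tall matrix, then one-sided Jacobi on the small triangular factor) can be sketched serially; the parallel blocking and orderings are the paper's contribution and are not reproduced here:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi: rotate pairs of columns until all are mutually
    orthogonal; then sigma_i = ||col_i|| and A = U diag(sigma) V^T."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = U[:, p] @ U[:, q]
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                denom = np.sqrt(app * aqq)
                off = max(off, abs(apq) / denom)
                if abs(apq) < tol * denom:
                    continue
                # classical Jacobi rotation angle for the 2x2 Gram block
                tau = (aqq - app) / (2.0 * apq)
                t = 1.0 if tau == 0.0 else np.sign(tau) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                for M in (U, V):       # same column rotation on U and V
                    Mp, Mq = M[:, p].copy(), M[:, q].copy()
                    M[:, p] = c * Mp - s * Mq
                    M[:, q] = s * Mp + c * Mq
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V

rng = np.random.default_rng(0)
A_tall = rng.standard_normal((200, 5))
Q, R = np.linalg.qr(A_tall)            # stage 1: QR of the tall matrix
Uj, s, V = one_sided_jacobi_svd(R)     # stage 2: Jacobi on the small R
assert np.allclose(np.sort(s), np.sort(np.linalg.svd(A_tall, compute_uv=False)))
```

Since the singular values of R equal those of the original tall matrix, all the Jacobi work happens on a 5 × 5 factor instead of the 200 × 5 data.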

Book ChapterDOI
01 Jan 1987
TL;DR: The results obtained indicate that the approach to parallelizing the QR factorization is competitive for very large problems, e.g., of the order 5000-by-1000.
Abstract: A statically scheduled parallel block QR factorization procedure is described. It is based on "block" Givens rotations and is modeled after the Gentleman-Kung systolic QR procedure. Independent tasks are associated with each block column. "Tallest possible" subproblems are always solved. The method has been implemented on the IBM Kingston LCAP-1 system, which consists of ten FPS-164/MAX array processors that can communicate through a large shared bulk memory. The implementation revealed much about the tradeoff between block size and load balancing. Large blocks make load balancing more difficult but give better 164/MAX performance and less shared memory traffic. The results obtained indicate that our approach to parallelizing the QR factorization is competitive for very large problems, e.g., of the order 5000-by-1000.

Proceedings ArticleDOI
13 Oct 1987
TL;DR: Two algorithms to carry out a QR factorization using the Givens transformation on both distributed-memory and shared-memory parallel computers are presented and implemented and evaluated in terms of speed-up time.
Abstract: Surface fitting allows for pose estimation and recognition of objects in range scenes. Unfortunately, surface fitting is computation-intensive. One way to speed-up this task is to use parallel processing, since its availability is on the increase. Mathematically, surface fitting can be formulated as an overdetermined linear system which can be solved in the least-square sense. Because of numerical stability and ease of implementation, the QR-factorization using the Givens transformation is best suited for the parallel solution of overdetermined systems. In this paper we present two algorithms to carry out a QR factorization on both distributed-memory and shared-memory parallel computers. These algorithms have been implemented and evaluated in terms of speed-up time.

Proceedings ArticleDOI
17 Aug 1987
TL;DR: In this paper, the singular value decomposition or QR decomposition is used to generate an orthonormal basis that spans the admissible eigenvector space corresponding to each assigned eigenvalue, and the eigenvector set which best approximates the given matrix in the least-squares sense while still satisfying the eigenvalue constraints is determined.
Abstract: An improved method is developed for eigenvalue and eigenvector placement of a closed-loop control system using either state or output feedback. The method basically consists of three steps. First, the singular value decomposition or QR decomposition is used to generate an orthonormal basis that spans the admissible eigenvector space corresponding to each assigned eigenvalue. Secondly, given a unitary matrix, the eigenvector set which best approximates the given matrix in the least-squares sense and still satisfies the eigenvalue constraints is determined. Thirdly, a unitary matrix is sought to minimize the error between the unitary matrix and the assignable eigenvector matrix. For use as the desired eigenvector set, two matrices, namely the open-loop eigenvector matrix and its closest unitary matrix, are proposed. The latter matrix generally encourages both minimum conditioning and control gains. In addition, the algorithm is formulated in real arithmetic for efficient implementation. To illustrate the basic concepts, numerical examples are included.

01 Jun 1987
TL;DR: It is shown that if rotations are applied to the triangular matrix so as to leave the number of its zero entries invariant, the sines of the rotation angles are partial correlations.
Abstract: The usual way of computing partial correlations is based on the formation of the covariance matrix, which amounts to squaring the data matrix, thus inviting a potential loss of numerical accuracy. This paper recommends the determination of partial correlations from the data matrix: the QR decomposition of the data matrix is computed and plane rotations are applied to the resulting upper triangular matrix, which is the Cholesky factor of the covariance matrix. It is shown that if rotations are applied to the triangular matrix so as to leave the number of its zero entries invariant, the sines of the rotation angles are partial correlations. Different ways of organizing the computations are presented for extracting any set of partial correlations.
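The motivation, avoiding the squaring of the data matrix, can be illustrated by computing partial correlations from the R factor directly. Note the paper's own scheme reads them off as sines of plane-rotation angles; the sketch below instead uses the standard precision-matrix route, working only with triangular solves on R:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def partial_corr_from_qr(X):
    """Partial correlation of each pair of columns of X given all the
    others, computed from the R factor of the centered data matrix,
    never forming X^T X explicitly."""
    Xc = X - X.mean(axis=0)
    _, R = qr(Xc, mode='economic')
    Rinv = solve_triangular(R, np.eye(R.shape[1]))
    P = Rinv @ Rinv.T                  # equals (Xc^T Xc)^{-1}
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)           # rho_{ij.rest} = -P_ij / sqrt(P_ii P_jj)
    np.fill_diagonal(pc, 1.0)
    return pc

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
pc = partial_corr_from_qr(X)
assert np.allclose(pc, pc.T) and np.all(np.abs(pc) <= 1.0 + 1e-12)
```

Since (XcᵀXc)⁻¹ = R⁻¹R⁻ᵀ, the covariance matrix never has to be formed, which is the numerical point the abstract makes.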

DOI
01 Jan 1987
TL;DR: A new algorithm for computing the QR factorization of an m×n Toeplitz matrix in O(mn) multiplications is presented, exploiting the procedure for the rank-1 modification and the fact that successive columns of a Toeplitz matrix are related to each other.
Abstract: We present a new algorithm for computing the QR factorization of an m×n Toeplitz matrix in O(mn) multiplications. The algorithm exploits the procedure for the rank-1 modification and the fact that successive columns of a Toeplitz matrix are related to each other. Both matrices Q and R are generated column by column, starting from their first columns. Each column is calculated from the previous column after a rank-1 modification to the matrix R and a step of the Gram-Schmidt orthogonalization process applied to two auxiliary vectors.

Journal ArticleDOI
TL;DR: In this article, a QR decomposition and the modification of the Gram-Schmidt method in least-squares analysis are presented, as well as a detailed description of the numerical calculation of the rotational constants.
Abstract: A QR decomposition and the modification of the Gram-Schmidt method in least-squares analysis are presented, as well as a detailed description of the numerical calculation of the rotational constants. The ordinary least-squares technique with the construction of normal equations and the corrected least-squares technique with QR decomposition are compared. These procedures were tested using well-defined synthetic data and the published microwave data on the CS2 molecule.

Journal ArticleDOI
TL;DR: In this paper, an efficient algorithm for computing the group inverse of a square, singular matrix, in factorized form, is introduced, based on the QR factorization with column pivoting and uses a technique of inversion by partitioning.
Abstract: An efficient algorithm is introduced for computing the group inverse of a square, singular matrix, in factorized form. The algorithm is based on the QR factorization with column pivoting and uses a technique of inversion by partitioning. The factorization is used to compute the group inverse solution of a singular system of equations. When only the solution vector is wanted, the group inverse does not need to be computed explicitly.
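The factorized form can be illustrated with the classical identity A# = B(CB)⁻²C for a full-rank factorization A = BC obtained from a column-pivoted QR. The paper's partitioned-inversion technique is more economical; treat this as a sketch of the factorized group inverse only, with the rank supplied rather than detected:

```python
import numpy as np
from scipy.linalg import qr

def group_inverse(A, rank):
    """Group inverse of a square index-1 matrix A via the full-rank
    factorization A = B C from column-pivoted QR, using A# = B (CB)^{-2} C.
    CB is nonsingular exactly when A has index 1."""
    n = A.shape[0]
    Q, R, piv = qr(A, pivoting=True)
    B = Q[:, :rank]                   # n x r, full column rank
    C = np.empty((rank, n))
    C[:, piv] = R[:rank, :]           # undo the column permutation: A = B C
    M = np.linalg.inv(C @ B)
    return B @ M @ M @ C

# rank-2 symmetric matrix: symmetric singular matrices always have index 1
rng = np.random.default_rng(0)
B0 = rng.standard_normal((5, 2))
A = B0 @ B0.T
Ag = group_inverse(A, rank=2)
assert np.allclose(A @ Ag @ A, A)     # A A# A = A
assert np.allclose(Ag @ A @ Ag, Ag)   # A# A A# = A#
assert np.allclose(A @ Ag, Ag @ A)    # A A# = A# A
```

The three asserted identities are exactly the defining properties of the group inverse.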

Proceedings ArticleDOI
01 Dec 1987
TL;DR: The Jordan form simplifies the determination of the Gramians, while the minimality of the realizations assures the numerical stability of the Cholesky decomposition of both Gramians.
Abstract: A method involving Householder transformations is presented for obtaining a minimal Jordan form realization of a transfer function matrix from its partial fraction expansion. The Jordan form simplifies the determination of the Gramians, while the minimality of the realizations assures the numerical stability of the Cholesky decomposition of both Gramians. Finally, the matrix requiring SVD to complete the transformation to balanced form is simplified to an upper triangular matrix by performing the aforementioned Cholesky decomposition in a complementary manner.

01 Sep 1987
TL;DR: Straightforward use of the QR factorization results in a realization scheme that possesses all of the computational advantages of Rissanen's realization scheme, and it is demonstrated that column pivoting might be incorporated in this second scheme.
Abstract: The use of the QR factorization of the Hankel matrix in solving the partial realization problem is analyzed. Straightforward use of the QR factorization results in a realization scheme that possesses all of the computational advantages of Rissanen's realization scheme. These latter properties are computational efficiency, recursiveness, use of limited computer memory, and the realization of a system triplet having a condensed structure. Moreover, this scheme is robust when the order of the system corresponds to the rank of the Hankel matrix. When this latter condition is violated, an approximate realization could be determined via the QR factorization. In this second scheme, the given Hankel matrix is approximated by a low-rank non-Hankel matrix. Furthermore, it is demonstrated that column pivoting might be incorporated in this second scheme. The results presented are derived for a single input/single output system, but this does not seem to be a restriction.

Proceedings ArticleDOI
01 Jan 1987
TL;DR: In this paper, a new algorithm for solving subset regression problems is described, which performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters.
Abstract: A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.