
Showing papers on "QR decomposition published in 1985"


Journal ArticleDOI
TL;DR: This paper describes an iterative method for reducing a general matrix to upper triangular form by unitary similarity transformations, similar to Jacobi's method for the symmetric eigenvalue problem in that it uses plane rotations to annihilate off-diagonal elements.
Abstract: This paper describes an iterative method for reducing a general matrix to upper triangular form by unitary similarity transformations. The method is similar to Jacobi's method for the symmetric eigenvalue problem in that it uses plane rotations to annihilate off-diagonal elements, and when the matrix is Hermitian it reduces to a variant of Jacobi's method. Although the method cannot compete with the QR algorithm in serial implementation, it admits of a parallel implementation in which a double sweep of the matrix can be done in time proportional to the order of the matrix.
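The plane-rotation idea is easiest to sketch in the Hermitian special case mentioned above, where the method reduces to a variant of classical Jacobi. The following is a minimal NumPy sketch under that assumption (not the paper's algorithm for general matrices): a rotation in the (p, q) plane is chosen to annihilate one off-diagonal entry, and cyclic sweeps drive the whole off-diagonal part toward zero.

```python
import numpy as np

def jacobi_rotation(A, p, q):
    """One plane rotation, chosen to annihilate A[p, q] of a symmetric
    matrix A by an orthogonal similarity transformation."""
    if A[p, q] == 0.0:
        return A
    tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
    t = np.sign(tau) / (abs(tau) + np.hypot(1.0, tau)) if tau != 0 else 1.0
    c = 1.0 / np.hypot(1.0, t)          # cos(theta)
    s = t * c                           # sin(theta)
    J = np.eye(A.shape[0])
    J[p, p] = J[q, q] = c
    J[p, q], J[q, p] = s, -s
    return J.T @ A @ J

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A0 = (M + M.T) / 2.0                    # symmetric test matrix
A = A0.copy()

# a few cyclic sweeps annihilate the off-diagonal elements
for _ in range(5):
    for p in range(n - 1):
        for q in range(p + 1, n):
            A = jacobi_rotation(A, p, q)

off = np.linalg.norm(A - np.diag(np.diag(A)))
```

Each rotation preserves the spectrum, so the diagonal of the converged iterate holds the eigenvalues; a parallel version would apply rotations in disjoint planes simultaneously, as the paper's double-sweep scheme does.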

107 citations


Journal ArticleDOI
TL;DR: Stewart has given an algorithm that uses the LINPACK SVD algorithm together with a Jacobi-type "clean-up" operation on a cross-product matrix; the technique presented here is equally stable and fast but avoids the cross-product matrix.
Abstract: If the columns of a matrix are orthonormal and it is partitioned into a 2-by-1 block matrix, then the singular value decompositions of the blocks are related. This is the essence of the "CS decomposition". The computation of these related SVD's requires some care. Stewart has given an algorithm that uses the LINPACK SVD algorithm together with a Jacobi-type "clean-up" operation on a cross-product matrix. Our technique is equally stable and fast but avoids the cross-product matrix. The simplicity of our technique makes it more amenable to parallel computation on systolic-type computer architectures. These developments are of interest because a good way to compute the generalized singular value decomposition of a matrix pair (A, B) is to compute the CS decomposition of a certain orthogonal column matrix related to A and B.
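The relation between the blocks' SVDs can be checked numerically in a few lines. In this NumPy sketch (an illustration of the CS-decomposition property, not of the paper's computational technique), the singular values of the two blocks pair up as cosines and sines:

```python
import numpy as np

rng = np.random.default_rng(1)
# Q has orthonormal columns; partition it as a 2-by-1 block matrix [Q1; Q2]
Q, _ = np.linalg.qr(rng.standard_normal((7, 3)))
Q1, Q2 = Q[:4, :], Q[4:, :]

c = np.linalg.svd(Q1, compute_uv=False)   # "cosines", descending
s = np.linalg.svd(Q2, compute_uv=False)   # "sines", descending

# Since Q1'Q1 + Q2'Q2 = I, pairing the largest cosine with the
# smallest sine gives c_i^2 + s_i^2 = 1.
pairs = c**2 + s[::-1]**2
```

The identity Q1ᵀQ1 + Q2ᵀQ2 = I is exactly what makes the two SVDs "related", and it is why computing them independently requires care: small singular values of one block are determined by values near 1 in the other.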

98 citations


Journal ArticleDOI
TL;DR: How Z can be obtained by updating an explicit QR factorization with Householder transformations is described, and why the chosen form of Z is convenient in certain methods for nonlinearly constrained optimization is indicated.
Abstract: Given a rectangular matrix A(x) that depends on the independent variables x, many constrained optimization methods involve computations with Z(x), a matrix whose columns form a basis for the null space of A^T(x). When A is evaluated at a given point, it is well known that a suitable Z (satisfying A^T Z = 0) can be obtained from standard matrix factorizations. However, Coleman and Sorensen have recently shown that standard orthogonal factorization methods may produce orthogonal bases that do not vary continuously with x; they also suggest several techniques for adapting these schemes so as to ensure continuity of Z in the neighborhood of a given point. This paper is an extension of an earlier note that defines the procedure for computing Z. Here, we first describe how Z can be obtained by updating an explicit QR factorization with Householder transformations. The properties of this representation of Z with respect to perturbations in A are discussed, including explicit bounds on the change in Z. We then introduce regularized Householder transformations, and show that their use implies continuity of the full matrix Q. The convergence of Z and Q under appropriate assumptions is then proved. Finally, we indicate why the chosen form of Z is convenient in certain methods for nonlinearly constrained optimization.
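The basic construction of Z from an orthogonal factorization is standard and can be sketched with NumPy (this shows only the well-known starting point, not the paper's regularized Householder updates): if A = QR with Q square, the trailing columns of Q span the null space of A^T.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))           # full column rank; plays the role of A(x)

# Full QR of A: Q = [Q1 Q2] with Q1 spanning range(A),
# so the trailing columns Q2 span the null space of A^T.
Q, R = np.linalg.qr(A, mode='complete')
n = A.shape[1]
Z = Q[:, n:]                              # orthonormal basis for null(A^T)

residual = np.linalg.norm(A.T @ Z)        # should be ~0: A^T Z = 0
```

The continuity problem the paper addresses arises because this Q, computed independently at nearby points x, can flip signs or swap columns; the regularized Householder transformations are designed to rule that out.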

32 citations


Journal ArticleDOI
TL;DR: It is shown that computing the block LU decomposition of A is twice as efficient as computing the usual LU decomposition of A, and the systolic solution of linear systems with matrix A is considered.
Abstract: After a brief discussion of systolic arrays for band matrix LU or QR decomposition, we introduce a new systolic array for the block 2x2 LU decomposition of a band matrix A. This array is a hexagonally connected systolic array whose efficiency is e = ½, although its hardware requirement is the same as that of the LU decomposition array of Kung and Leiserson [8]. In the last section we consider the systolic solution of linear systems with matrix A: we show that computing the block LU decomposition of A is twice as efficient as computing the usual LU decomposition of A.
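One elimination step of a block LU decomposition can be sketched with NumPy. This is a generic dense illustration with 2x2 blocks, assuming the leading block is invertible; the paper's contribution is the systolic array for the banded case, not this factorization itself.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # leading block well conditioned
k = 2                                              # 2x2 blocks, as in the paper

A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]

# One step of block elimination: A = L @ U with block-triangular factors.
L21 = A21 @ np.linalg.inv(A11)     # block multiplier
S = A22 - L21 @ A12                # Schur complement of A11

L = np.block([[np.eye(k), np.zeros((k, k))], [L21, np.eye(k)]])
U = np.block([[A11, A12], [np.zeros((k, k)), S]])
```

Working on 2x2 blocks halves the number of elimination steps along the band, which is the source of the factor-of-two efficiency gain claimed for the systolic solution.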

18 citations


Journal ArticleDOI
TL;DR: The performance of X'X, the QR decomposition, and the Singular Value Factorization is analyzed and alternative implementation strategies corresponding to vector building block, vector-matrix, and direct algorithms with explicit buffer management are developed.
Abstract: Most research in statistical databases has concentrated on retrieval, sampling, and aggregation type statistical queries. Data management issues associated with computational statistical operations have been ignored. As a first step towards integrating database management support of statistical operations, we have analyzed the performance of X'X, the QR decomposition, and the Singular Value Factorization. Alternative implementation strategies with respect to the relational and transposed storage organizations are developed. Implementation strategies corresponding to vector building block, vector-matrix, and direct algorithms with explicit buffer management are compared in terms of performance.

16 citations


Journal ArticleDOI
TL;DR: A number of results on the distributions of least squares estimators and their associated t and F statistics are derived via the QR algorithm, given the initial QR transformation.
Abstract: A number of results on the distributions of least squares estimators and their associated t and F statistics are derived via the QR algorithm. Given the initial QR transformation, the proofs are simple and direct, and the transformation itself requires only a minimal understanding of matrix algebra.
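The computational side of this approach is easy to sketch: once X = QR is available, the least squares estimate, the residual sum of squares, and the t statistics all follow from the triangular factor. A minimal NumPy sketch on simulated data (an illustration of the standard QR route, not the paper's distributional proofs):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# The initial QR transformation of the design matrix
Q, R = np.linalg.qr(X)               # X = QR, R upper triangular
beta = np.linalg.solve(R, Q.T @ y)   # least squares estimator

# Residual variance and t statistics from the same factorization:
# (X'X)^{-1} = R^{-1} R^{-T}, so its diagonal is the row sums of
# squares of R^{-1}.
rss = np.sum((y - X @ beta) ** 2)
sigma2 = rss / (n - p)
Rinv = np.linalg.inv(R)
se = np.sqrt(sigma2 * np.sum(Rinv**2, axis=1))
t_stats = beta / se
```

Because Q has orthonormal columns, Qᵀy has the same distribution properties as y under rotation, which is what makes the proofs via the QR transformation "simple and direct".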

13 citations


01 Jan 1985
TL;DR: It is shown that structural QR factorization is exact for matrices with the strong Hall property, and chordal graph recognition is shown to be in NC, the class of problems with fast (O(log^k n)) parallel algorithms.
Abstract: When a matrix A is factored, using either an LU decomposition or a QR factorization, its nonzero structure will in general change. I have studied the problem of accurately predicting the changes in structure in the matrix. A framework for categorizing structure-predicting algorithms as "correct" or not is introduced. A combinatorial algorithm that correctly predicts changes in structure is called exact. Previously suggested structure-predicting algorithms are shown not to be exact. Precise conditions are given under which structural Gaussian elimination is exact. For a subclass of matrix structures, having the so-called "strong Hall property", it is shown that structural QR factorization is exact. The problem of minimizing the filling in of zeros with nonzeros in the factor of the matrix is known to be NP-complete for LU decomposition. One suggested heuristic approach for this problem is Quotient Tree Partitioning. Three different measures of what constitutes a good partitioning are suggested, and for each measure the problem is shown to be NP-complete. An algorithm for finding a maximal Quotient Tree Partitioning is given, and shown to have a linear running time. The NP-completeness of minimizing fill in QR factorization is shown, and an algorithm for QR factorization, that avoids the problem of predicting a too large structure for the factored matrix, is given. Chordal graphs are shown to have separators of size O(√m) that can be found in linear time, and chordal graph recognition is shown to be in NC, the class of problems with fast (O(log^k n)) parallel algorithms.

10 citations


Journal ArticleDOI
TL;DR: It is shown that the parallel arithmetic computational complexities of the Cholesky and QR factorizations of a matrix are upper bounded by O(log² n) steps, and a new parallel method for QR factorization of a symmetric positive definite tridiagonal matrix is proposed.
Abstract: In this paper, it is shown that the parallel arithmetic computational complexities of the Cholesky and QR factorizations of a matrix are upper bounded by O(log² n) steps. Also, a new parallel method for QR factorization of a symmetric positive definite tridiagonal matrix is proposed. This method requires only O(log n) steps using O(n) processors.

6 citations


Journal ArticleDOI
TL;DR: On the Distributed Array Processor (DAP), Householder reflections turn out to be faster than Givens rotations for performing the QR factorization of an m × n matrix, especially for m ≫ n.
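The Householder variant favored here is the standard column-at-a-time QR. A minimal NumPy sketch of that algorithm (a generic serial implementation, not the DAP code): each step builds a reflection that zeros one column below the diagonal, which is why a tall matrix (m ≫ n) needs only n reflections versus O(mn) Givens rotations.

```python
import numpy as np

def householder_qr(A):
    """QR factorization by Householder reflections, one column at a time."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        nv = np.linalg.norm(v)
        if nv > 0:
            v /= nv
            # apply H = I - 2 v v^T to the trailing submatrix and to Q
            R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
            Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 3))          # tall matrix, m >> n
Q, R = householder_qr(A)
```

On a SIMD array like the DAP, the rank-one updates inside the loop map naturally onto the processor grid, which is consistent with the speed advantage reported over element-by-element Givens rotations.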

5 citations


Journal ArticleDOI
TL;DR: In this paper, a data matrix is reduced to triangular form by using orthogonal transformations and an analysis of variance can be constructed from the triangular reduction of the data matrix, which can then be used for inference on linear combinations of parameters of a linear model.
Abstract: Procedures are presented for reducing a data matrix to triangular form by using orthogonal transformations. It is shown how an analysis of variance can be constructed from the triangular reduction of the data matrix. Procedures for calculating sums of squares, degrees of freedom, and expected mean squares are presented. It is demonstrated that all statistics needed for inference on linear combinations of parameters of a linear model may be calculated from the triangular reduction of the data matrix.
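The sum-of-squares bookkeeping described above falls out of a triangular reduction of the augmented data matrix [X | y]. A small NumPy sketch of that idea for simple regression (an illustration of the principle, not the paper's full expected-mean-squares procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 30
x = rng.standard_normal(n)
y = 2.0 + 1.5 * x + 0.3 * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# Triangular reduction of the data matrix [X | y] by orthogonal transformations
Q, R = np.linalg.qr(np.column_stack([X, y]))

# The last column of R splits ||y||^2: its first p entries carry the
# model sum of squares (including the mean), the trailing entry the
# residual sum of squares.
p = X.shape[1]
ss_model = np.sum(R[:p, -1] ** 2)
ss_resid = R[p, -1] ** 2
ss_total = np.sum(y ** 2)            # uncorrected total SS
```

Degrees of freedom come along for free: p for the model rows and n − p for the residual, so every statistic needed for inference on linear combinations of the parameters is read off the triangular factor.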

5 citations


Journal ArticleDOI
TL;DR: The purpose is to show how to get a series of lower triangular matrices by alternate orthogonal-upper triangular decompositions in different dimensions and to prove the convergence of this series.
Abstract: A generalization of the QR algorithm proposed by Francis [2] for square matrices is introduced for the singular value decomposition of arbitrary rectangular matrices. Geometrically, the algorithm amounts to successively orthogonalizing the images of the orthonormal bases produced in the course of the iteration. Our purpose is to show how to get a series of lower triangular matrices by alternate orthogonal-upper triangular decompositions in different dimensions and to prove the convergence of this series.
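The alternating decomposition is simple to sketch with NumPy: QR-factor the current iterate, transpose the triangular factor (switching between the two dimensions of the rectangular matrix), and repeat. Under the assumption of well-separated singular values, the triangular iterates approach a diagonal matrix of singular values; the convergence proof is the paper's contribution, and this sketch only illustrates the mechanism.

```python
import numpy as np

rng = np.random.default_rng(7)
# Build a 5x3 matrix with known, well-separated singular values 3, 2, 1
U, _ = np.linalg.qr(rng.standard_normal((5, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = U @ np.diag([3.0, 2.0, 1.0]) @ V.T

# Alternate orthogonal-triangular decompositions: since A = QR with Q
# having orthonormal columns, R (and hence R^T) has the same singular
# values as A, so each step preserves them exactly.
B = A
for _ in range(60):
    Q, R = np.linalg.qr(B)
    B = R.T

approx = np.sort(np.abs(np.diag(B)))[::-1]   # should approach (3, 2, 1)
```

Each iterate alternates between "orthogonal times upper triangular" in one dimension and its transpose in the other, which is exactly the series of lower triangular matrices whose convergence the paper proves.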