Journal ArticleDOI

Linear least squares solutions by householder transformations

01 Jun 1965-Numerische Mathematik (Springer Berlin Heidelberg)-Vol. 7, Iss: 3, pp 269-276
TL;DR: In this paper, a vector $\hat x$ minimizing $\parallel b - A\hat x\parallel$ is determined; since the Euclidean norm is unitarily invariant, $\parallel b - Ax\parallel = \parallel c - QAx\parallel$ with c = Qb, and $\hat x$ is recovered from the first n components of c.
Abstract: Let A be a given m×n real matrix with m≧n and of rank n, and let b be a given vector. We wish to determine a vector $\hat x$ such that $$\parallel b - A\hat x\parallel = \min ,$$ where ∥ … ∥ indicates the Euclidean norm. Since the Euclidean norm is unitarily invariant, $$\parallel b - Ax\parallel = \parallel c - QAx\parallel ,$$ where c = Qb and $Q^T Q = I$. We choose Q so that $$QA = R = \begin{pmatrix} \tilde R \\ 0 \end{pmatrix}$$ (1) where $\tilde R$ is an n×n upper triangular matrix and 0 is the (m−n)×n zero matrix. Clearly, $$\hat x = \tilde R^{-1} \tilde c ,$$ where $\tilde c$ denotes the first n components of c.
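The scheme in the abstract can be sketched in Python with NumPy (an illustrative reconstruction, not the paper's published procedure; the helper name `householder_ls` is ours, and A is assumed to have full column rank):

```python
import numpy as np

def householder_ls(A, b):
    """Least squares via Householder QR: reduce A to upper-triangular R,
    apply the same reflections to b, then back-substitute.
    Assumes A is m x n with m >= n and rank n."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    m, n = A.shape
    for k in range(n):
        x = A[k:, k]
        # Householder vector v so (I - beta v v^T) x = -sign(x_0) ||x|| e_1
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        beta = 2.0 / (v @ v)
        # Apply the reflection to the trailing submatrix and to b
        A[k:, k:] -= beta * np.outer(v, v @ A[k:, k:])
        b[k:] -= beta * v * (v @ b[k:])
    # Solve R_tilde x = c_tilde with the leading n x n triangle
    return np.linalg.solve(np.triu(A[:n, :n]), b[:n])

# Agrees with a reference least-squares solver on a small random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x = householder_ls(A, b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```

The reflections are applied to b as they are generated, so Q is never formed explicitly; only the triangle $\tilde R$ and the first n components of c are needed.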
Citations
Journal ArticleDOI
TL;DR: The decomposition of A is called the singular value decomposition (SVD), and the diagonal elements of Σ are the non-negative square roots of the eigenvalues of $A^T A$; they are called singular values.
Abstract: Let A be a real m×n matrix with m≧n. It is well known (cf. [4]) that $$A = U\Sigma V^T$$ (1) where $$U^T U = V^T V = V V^T = I_n \quad \text{and} \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n).$$ The matrix U consists of n orthonormalized eigenvectors associated with the n largest eigenvalues of $AA^T$, and the matrix V consists of the orthonormalized eigenvectors of $A^T A$. The diagonal elements of Σ are the non-negative square roots of the eigenvalues of $A^T A$; they are called singular values. We shall assume that $$\sigma_1 \geqq \sigma_2 \geqq \cdots \geqq \sigma_n \geqq 0.$$ Thus if rank(A) = r, then $\sigma_{r+1} = \sigma_{r+2} = \cdots = \sigma_n = 0$. The decomposition (1) is called the singular value decomposition (SVD).
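The stated properties of the decomposition can be checked numerically with NumPy's `svd` (a quick illustration only; the random test matrix is arbitrary, and `full_matrices=False` gives the "economy" U of size m×n described above):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.allclose(U.T @ U, np.eye(3))        # U^T U = I_n
assert np.allclose(Vt @ Vt.T, np.eye(3))      # V^T V = V V^T = I_n
assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)  # sigma_1 >= ... >= sigma_n >= 0
assert np.allclose(U @ np.diag(s) @ Vt, A)    # A = U Sigma V^T
# The singular values are the square roots of the eigenvalues of A^T A
assert np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A)))
```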

3,036 citations

Journal ArticleDOI
TL;DR: Experimental results indicate that MGS gives $\theta_k$ with equal precision and fewer arithmetic operations than HT; however, HT gives principal vectors that are orthogonal to working accuracy, which is not in general true for MGS.
Abstract: Assume that two subspaces F and G of a unitary space are defined as the ranges (or nullspaces) of given rectangular matrices A and B. Accurate numerical methods are developed for computing the principal angles $\theta_k (F,G)$ and orthogonal sets of principal vectors $u_k \in F$ and $v_k \in G$, k = 1,2,..., q = dim(G) $\leq$ dim(F). An important application in statistics is computing the canonical correlations $\sigma_k = \cos \theta_k$ between two sets of variates. A perturbation analysis shows that the condition number for $\theta_k$ essentially is max($\kappa (A),\kappa (B)$), where $\kappa$ denotes the condition number of a matrix. The algorithms are based on a preliminary QR-factorization of A and B (or $A^H$ and $B^H$), for which either the method of Householder transformations (HT) or the modified Gram-Schmidt method (MGS) is used. Then $\cos \theta_k$ and $\sin \theta_k$ are computed as the singular values of certain related matrices. Experimental results are given which indicate that MGS gives $\theta_k$ with equal precision and fewer arithmetic operations than HT. However, HT gives principal vectors that are orthogonal to working accuracy, which is not in general true for MGS. Finally, the case when A and/or B are rank deficient is discussed.
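The QR-then-SVD recipe for principal angles can be sketched as follows (an illustrative NumPy version assuming full column rank; the function name `principal_angles` is ours, and only the cosine-based formula is shown, which loses accuracy for very small angles):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between range(A) and range(B):
    orthonormalize each basis by QR, then take the SVD of Q_A^T Q_B."""
    QA, _ = np.linalg.qr(A)   # orthonormal basis for F = range(A)
    QB, _ = np.linalg.qr(B)   # orthonormal basis for G = range(B)
    # cos(theta_k) are the singular values of QA^T QB
    cos_theta = np.linalg.svd(QA.T @ QB, compute_uv=False)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Two planes in R^3 sharing the x-axis and otherwise perpendicular:
# the principal angles should be 0 and pi/2
A = np.array([[1., 0.], [0., 1.], [0., 0.]])   # xy-plane
B = np.array([[1., 0.], [0., 0.], [0., 1.]])   # xz-plane
angles = principal_angles(A, B)
assert np.allclose(angles, [0.0, np.pi / 2])
```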

763 citations

Journal ArticleDOI
TL;DR: Two algorithms are presented for computing rank-revealing QR factorizations; they are nearly as efficient as QR with column pivoting for most problems and take $O(mn^2)$ floating-point operations in the worst case.
Abstract: Given an m×n matrix M with m > n, it is shown that there exists a permutation Π and an integer k such that the QR factorization $$M\Pi = Q\begin{pmatrix} A_k & B_k \\ & C_k \end{pmatrix}$$ reveals the numerical rank of M: the k×k upper-triangular matrix $A_k$ is well conditioned, $\|C_k\|_2$ is small, and $B_k$ is linearly dependent on $A_k$ with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations, and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take $O(mn^2)$ floating-point operations in the worst case.
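QR with column pivoting, the baseline these algorithms are compared against, can be sketched in NumPy (a minimal illustration; the Businger-Golub algorithm downdates column norms rather than recomputing them at every step, which this sketch skips for clarity):

```python
import numpy as np

def qr_column_pivoting(M):
    """QR with column pivoting: at each step, swap in the trailing column
    of largest norm, then zero it below the diagonal with a Householder
    reflection. Returns Q, R, piv with M[:, piv] = Q R."""
    R = M.astype(float).copy()
    m, n = R.shape
    Q = np.eye(m)
    piv = np.arange(n)
    for k in range(min(m, n)):
        # Pivot: bring the trailing column of largest norm to position k
        j = k + np.argmax(np.sum(R[k:, k:]**2, axis=0))
        R[:, [k, j]] = R[:, [j, k]]
        piv[[k, j]] = piv[[j, k]]
        # Householder reflection zeroing R[k+1:, k]
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = v @ v
        if nv > 0:
            beta = 2.0 / nv
            R[k:, k:] -= beta * np.outer(v, v @ R[k:, k:])
            Q[:, k:] -= beta * np.outer(Q[:, k:] @ v, v)
    return Q, np.triu(R), piv

# A numerically rank-2 matrix: the diagonal of R exposes the rank
rng = np.random.default_rng(2)
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 5))
Q, R, piv = qr_column_pivoting(M)
d = np.abs(np.diag(R))
assert int(np.sum(d > 1e-10 * d[0])) == 2
assert np.allclose(M[:, piv], Q @ R)
```

With pivoting, the diagonal of R is non-increasing in magnitude, so a sharp drop signals the numerical rank; the examples cited in the paper are precisely the cases where this simple strategy fails to produce such a drop.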

698 citations


Cites background or methods from "Linear least squares solutions by h..."

  • ...The Businger and Golub algorithm [8], [20] works well in practice, but there are examples where it fails to produce a factorization satisfying (4) (see the example in Section 2)....


  • ...In Section 2 we review QR with column pivoting [8], [20] and the Chandrasekaran and Ipsen algorithm [13] for computing an RRQR factorization....


  • ...Golub [20] introduced these factorizations and, with Businger [8], developed the first algorithm (QR with column pivoting) for computing them....


  • ...In Section 7 we show that the concept of a strong RRQR factorization is not completely new, in that the QR factorization given by the Businger and Golub algorithm [8], [20] satisfies (5) and (6) with q1(k, n) and q2(k, n) functions that grow exponentially with k. Finally, in Section 8 we present some extensions of this work, including a version of Algorithm 5 that is nearly as fast as QR with column pivoting for most problems and takes $O(mn^2)$ floating-point operations in the worst case....


  • ...QR with column pivoting [8], [20] is a modification of the ordinary QR algorithm....


Journal ArticleDOI
TL;DR: Developments in the theory of linear least-squares estimation in the last thirty years or so are outlined; particular attention is paid to early mathematical work in the field and to more modern developments showing some of the many connections between least-squares filtering and other fields.
Abstract: Developments in the theory of linear least-squares estimation in the last thirty years or so are outlined. Particular attention is paid to early mathematical work in the field and to more modern developments showing some of the many connections between least-squares filtering and other fields.

696 citations

Journal ArticleDOI
TL;DR: A survey of computational methods in linear algebra can be found in this article, where the authors discuss the means and methods of estimating the quality of numerical solution of computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, and more traditional questions such as algebraic eigenvalue problems and systems with a square matrix.
Abstract: The authors' survey paper is devoted to the present state of computational methods in linear algebra. Questions discussed are the means and methods of estimating the quality of numerical solution of computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, the inverse eigenvalue problem, and more traditional questions such as algebraic eigenvalue problems and the solution of systems with a square matrix (by direct and iterative methods).

667 citations

References
Journal ArticleDOI
TL;DR: This note points out that the same result can be obtained with fewer arithmetic operations, and, in particular, for inverting a square matrix of order N, at most 2(N-1) square roots are required.
Abstract: A method for the inversion of a nonsymmetric matrix has been in use at ORNL and has proved to be highly stable numerically but to require a rather large number of arithmetic operations, including a total of N(N-1)/2 square roots. This note points out that the same result can be obtained with fewer arithmetic operations, and, in particular, for inverting a square matrix of order N, at most 2(N-1) square roots are required. For N > 4, this is a savings of (N-4)(N-1)/4 square roots. (T.B.A.)

577 citations


"Linear least squares solutions by h..." refers methods in this paper

  • ...A very effective method to realize the decomposition (1) is via Householder transformations [1]....


Journal ArticleDOI

50 citations


"Linear least squares solutions by h..." refers background in this paper

  • ...It can be shown, cf. [2], that p(k) is generated as follows:...
