Journal ArticleDOI

Numerical aspects of Gram-Schmidt orthogonalization of vectors

01 Jul 1983 - Linear Algebra and its Applications (North-Holland) - Iss: 1, pp. 591-601
TL;DR: Several variants of Gram-Schmidt orthogonalization are reviewed from a numerical point of view in this paper, and it is shown that the classical and modified variants correspond to the Gauss-Jacobi and Gauss-Seidel iterations for linear systems.
About: This article is published in Linear Algebra and its Applications. The article was published on 1983-07-01 and is currently open access. It has received 73 citations to date. The article focuses on the topics: Orthogonalization and Gram–Schmidt process.
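
A minimal sketch (not taken from the paper) of the two variants being compared: in classical Gram-Schmidt every projection coefficient is computed from the original column (a Gauss-Jacobi-like update), while in modified Gram-Schmidt the projections are subtracted one at a time from the partially reduced vector (a Gauss-Seidel-like sweep). The function names, test matrix, and its conditioning are illustrative choices.

```python
import numpy as np

def classical_gs(A):
    """Classical Gram-Schmidt: every projection coefficient is computed from the
    original column a_j (a Gauss-Jacobi-like update)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        R[:j, j] = Q[:, :j].T @ A[:, j]
        v = A[:, j] - Q[:, :j] @ R[:j, j]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def modified_gs(A):
    """Modified Gram-Schmidt: projections are subtracted one at a time from the
    partially reduced vector (a Gauss-Seidel-like sweep)."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        for k in range(j + 1, n):
            R[j, k] = Q[:, j] @ Q[:, k]
            Q[:, k] -= R[j, k] * Q[:, j]
    return Q, R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n = 60, 12
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = U @ np.diag(np.logspace(0, -7, n)) @ W.T   # condition number about 1e7
    for name, gs in [("classical", classical_gs), ("modified", modified_gs)]:
        Q, R = gs(A)
        print(f"{name:9s}  ||I - Q^T Q|| = "
              f"{np.linalg.norm(np.eye(n) - Q.T @ Q):.2e}")
```

On an ill-conditioned matrix like this, the classical variant loses orthogonality much faster than the modified one, which is the numerical point the review makes.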
Citations
Journal ArticleDOI
TL;DR: In this article, a new method for the iterative computation of a few extremal eigenvalues of a symmetric matrix and their associated eigenvectors is proposed, based on an old and almost unknown method of Jacobi.
Abstract: In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well.

900 citations
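
For context, a compact NumPy sketch of the Jacobi-Davidson idea summarized above: build a search space, extract a Ritz pair, and expand the space with the solution of a projected correction equation. The starting vector, the dense solve of the correction equation, the tolerances, and the name jacobi_davidson are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def jacobi_davidson(A, maxit=50, tol=1e-10):
    """Sketch of a Jacobi-Davidson iteration for the largest eigenpair of a
    symmetric matrix A, with the correction equation solved by a dense solve."""
    n = A.shape[0]
    V = np.zeros((n, 0))
    t = np.ones(n)                                  # starting vector (arbitrary)
    theta, u = 0.0, t / np.linalg.norm(t)
    for _ in range(maxit):
        # Orthogonalize the expansion vector against the current search space.
        t = t - V @ (V.T @ t)
        V = np.hstack([V, (t / np.linalg.norm(t))[:, None]])
        # Ritz pair of the projected (small) eigenproblem.
        w, S = np.linalg.eigh(V.T @ A @ V)
        theta, s = w[-1], S[:, -1]
        u = V @ s
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol:
            break
        # Correction equation (I - uu^T)(A - theta I)(I - uu^T) t = -r,
        # made nonsingular by adding uu^T and solved exactly (for illustration).
        P = np.eye(n) - np.outer(u, u)
        t = np.linalg.solve(P @ (A - theta * np.eye(n)) @ P + np.outer(u, u), -r)
    return theta, u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    B = rng.standard_normal((100, 100))
    A = (B + B.T) / 2
    theta, u = jacobi_davidson(A)
    print("Ritz value  :", theta)
    print("largest eig :", np.linalg.eigvalsh(A)[-1])
```

With an exact correction solve this dense version converges in a handful of outer iterations; practical implementations solve the correction equation only approximately.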

Journal ArticleDOI
TL;DR: A new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors is proposed that has improved convergence properties and that may be used for general matrices.

522 citations


Cites background from "Numerical aspects of Gram-Schmidt o..."

  • ...In such a situation, it may help to apply mod-GS more often [24]....

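The snippet above refers to reorthogonalization: when cancellation has damaged orthogonality, running a further Gram-Schmidt pass over the already-computed basis restores it. A small sketch under that reading, with the two-pass ("twice is enough") default, illustrative function names, and an artificial ill-conditioned test matrix:

```python
import numpy as np

def mgs_step(Q, v):
    """One modified Gram-Schmidt sweep of v against the orthonormal columns of Q."""
    for j in range(Q.shape[1]):
        v = v - (Q[:, j] @ v) * Q[:, j]
    return v

def orthonormalize(A, passes=2):
    """Column-by-column MGS with a fixed number of (re)orthogonalization passes.
    passes=2 is the usual 'twice is enough' choice; the parameter is illustrative."""
    m, n = A.shape
    Q = np.zeros((m, 0))
    for k in range(n):
        v = A[:, k].astype(float)
        for _ in range(passes):
            v = mgs_step(Q, v)
        Q = np.hstack([Q, (v / np.linalg.norm(v))[:, None]])
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n = 60, 12
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = U @ np.diag(np.logspace(0, -10, n)) @ W.T   # condition number about 1e10
    for p in (1, 2):
        Q = orthonormalize(A, passes=p)
        print(f"{p} pass(es): ||I - Q^T Q|| = "
              f"{np.linalg.norm(np.eye(n) - Q.T @ Q):.2e}")
```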

Journal ArticleDOI
TL;DR: The classical and modified Gram-Schmidt (CGS and MGS) orthogonalization is one of the fundamental procedures in linear algebra, as mentioned in this paper, and it is equivalent to the factorization A = Q1R, where Q1 ∈ R^(m×n) has orthonormal columns and R is upper triangular.

242 citations
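
A quick numerical check of the factorization statement in the TL;DR above, using NumPy's Householder-based thin QR rather than Gram-Schmidt itself; the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5))            # m x n with m >= n (illustrative sizes)
Q1, R = np.linalg.qr(A, mode="reduced")    # Q1: 8 x 5, R: 5 x 5

print("factorization residual ||A - Q1 R|| :", np.linalg.norm(A - Q1 @ R))
print("orthonormality ||I - Q1^T Q1||      :", np.linalg.norm(np.eye(5) - Q1.T @ Q1))
print("R upper triangular                  :", np.allclose(R, np.triu(R)))
```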

Book ChapterDOI

203 citations

Journal ArticleDOI
TL;DR: It will be shown how intermediate information obtained by the conjugate gradient (CG) algorithm can be used to solve f(A)x = b iteratively in an efficient way, for suitable functions f.

182 citations
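
In the same spirit, though not the specific algorithm of this paper, a generic Lanczos sketch of how the tridiagonal matrix T_m built from Krylov information can be reused to approximate f(A)b as ||b|| V_m f(T_m) e_1 (here with f = exp; the matrix, its scaling, and the number of steps are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def lanczos(A, b, m):
    """m steps of Lanczos: orthonormal V (n x m) and symmetric tridiagonal T (m x m)."""
    n = len(b)
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    beta, v_prev = 0.0, np.zeros(n)
    v = b / np.linalg.norm(b)
    for j in range(m):
        V[:, j] = v
        w = A @ v - beta * v_prev
        alpha = v @ w
        w = w - alpha * v
        # Full reorthogonalization: simple, and keeps V numerically orthonormal.
        w = w - V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        T[j, j] = alpha
        beta = np.linalg.norm(w)
        if j + 1 < m:
            T[j, j + 1] = T[j + 1, j] = beta
            v_prev, v = v, w / beta
    return V, T

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    B = rng.standard_normal((200, 200))
    A = (B + B.T) / 20                      # symmetric, spectrum roughly in [-2, 2]
    b = rng.standard_normal(200)
    V, T = lanczos(A, b, m=30)
    approx = np.linalg.norm(b) * (V @ expm(T)[:, 0])    # ||b|| * V_m f(T_m) e_1
    exact = expm(A) @ b
    print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```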

References
Book
30 Nov 1961
TL;DR: This book develops the theory of iterative methods for linear systems, from basic iterative methods, successive overrelaxation, and semi-iterative methods through alternating-direction implicit methods, matrix methods for parabolic partial differential equations, and the estimation of acceleration parameters.
Abstract: Matrix Properties and Concepts; Nonnegative Matrices; Basic Iterative Methods and Comparison Theorems; Successive Overrelaxation Iterative Methods; Semi-Iterative Methods; Derivation and Solution of Elliptic Difference Equations; Alternating-Direction Implicit Iterative Methods; Matrix Methods for Parabolic Partial Differential Equations; Estimation of Acceleration Parameters.

5,317 citations
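
A minimal sketch of the two classical iterations analyzed in this reference, and to which the main article relates the Gram-Schmidt variants: Jacobi updates every component from the previous iterate, while Gauss-Seidel uses each updated component as soon as it is available. The test system and iteration count are illustrative.

```python
import numpy as np

def jacobi(A, b, x0, iters):
    """Jacobi iteration: every component is updated from the previous iterate."""
    D = np.diag(A)
    x = x0.copy()
    for _ in range(iters):
        x = (b - (A @ x - D * x)) / D
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel iteration: each new component is used as soon as it is computed."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

if __name__ == "__main__":
    n = 50
    A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # diagonally dominant
    b = np.ones(n)
    x0 = np.zeros(n)
    for name, method in [("Jacobi", jacobi), ("Gauss-Seidel", gauss_seidel)]:
        x = method(A, b, x0, iters=50)
        print(f"{name:12s} residual after 50 iterations: "
              f"{np.linalg.norm(b - A @ x):.2e}")
```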

Journal ArticleDOI
TL;DR: Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.
Abstract: An iterative method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.

4,189 citations
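
A usage sketch, assuming SciPy's scipy.sparse.linalg.lsqr implementation of this algorithm is an acceptable stand-in for the FORTRAN subroutine; the (dense, for brevity) test problem and tolerances are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(4)
A = rng.standard_normal((1000, 200))   # dense here for brevity; LSQR targets large sparse A
b = rng.standard_normal(1000)

# Solve min ||Ax - b||_2 iteratively; lsqr also reports an estimate of cond(A).
result = lsqr(A, b, atol=1e-10, btol=1e-10)
x, itn, r1norm, acond = result[0], result[2], result[3], result[6]
print("iterations       :", itn)
print("residual norm    :", f"{r1norm:.3e}")
print("cond(A) estimate :", f"{acond:.3e}")
```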

Journal ArticleDOI
TL;DR: This paper considers stable numerical methods for linear least squares problems, which frequently involve large quantities of data and are ill-conditioned by their very nature.
Abstract: A common problem in a Computer Laboratory is that of finding linear least squares solutions. These problems arise in a variety of areas and in a variety of contexts. Linear least squares problems are particularly difficult to solve because they frequently involve large quantities of data, and they are ill-conditioned by their very nature. In this paper, we shall consider stable numerical methods for handling these problems. Our basic tool is a matrix decomposition based on orthogonal Householder transformations.

764 citations
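
A brief sketch of the approach described here: solve a least squares problem through an orthogonal (Householder) QR decomposition rather than the normal equations. The data and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(5)
A = rng.standard_normal((100, 6))
b = rng.standard_normal(100)

Q, R = np.linalg.qr(A, mode="reduced")   # Householder-based thin QR
x = solve_triangular(R, Q.T @ b)         # min ||Ax - b||_2 via R x = Q^T b

# Sanity check against the library least-squares solver.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print("difference from lstsq:", np.linalg.norm(x - x_ref))
```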

Journal ArticleDOI

745 citations