Journal ArticleDOI

The use of a refined error bound when updating eigenvalues of tridiagonals

01 Sep 1983 - Linear Algebra and its Applications (North-Holland) - Vol. 68, pp. 179-219
TL;DR: A robust Lanczos algorithm is presented which is fast, easy to understand, uses about 30 words of extra storage, and has a fairly short program.
About: This article is published in Linear Algebra and its Applications. The article was published on 1983-09-01 and is currently open access. It has received 41 citations to date. The article focuses on the topics: Lanczos approximation & Lanczos algorithm.
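
The TL;DR above does not state the bound itself. For reference, the sketch below implements the classical Lanczos residual bound that codes of this kind monitor, not the paper's refined bound (the helper name is mine): after j steps, each Ritz value theta_i of the tridiagonal T_j lies within beta_{j+1}*|s_{j,i}| of an eigenvalue of A, where s_{j,i} is the bottom entry of the corresponding eigenvector of T_j.

```python
# Minimal sketch: classical Ritz-value error bound (not the paper's refinement).
import numpy as np

def ritz_error_bounds(alpha, beta):
    """alpha: diagonal of T_j (length j); beta: off-diagonals beta_2..beta_{j+1} (length j)."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    j = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:j - 1], 1) + np.diag(beta[:j - 1], -1)
    theta, S = np.linalg.eigh(T)                 # Ritz values / eigenvectors of T_j
    return theta, beta[-1] * np.abs(S[-1, :])    # bound: beta_{j+1} * |bottom entry|
```
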
Citations
Journal ArticleDOI
TL;DR: A deflation procedure is introduced that is designed to improve the convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix and implicitly deflates the converged approximations from the iteration.
Abstract: A deflation procedure is introduced that is designed to improve the convergence of an implicitly restarted Arnoldi iteration for computing a few eigenvalues of a large matrix. As the iteration progresses, the Ritz value approximations of the eigenvalues converge at different rates. A numerically stable scheme is introduced that implicitly deflates the converged approximations from the iteration. We present two forms of implicit deflation. The first, a locking operation, decouples converged Ritz values and associated vectors from the active part of the iteration. The second, a purging operation, removes unwanted but converged Ritz pairs. Convergence of the iteration is improved and a reduction in computational effort is also achieved. The deflation strategies make it possible to compute multiple or clustered eigenvalues with a single vector restart method. A block method is not required. These schemes are analyzed with respect to numerical stability, and computational results are presented.

654 citations
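
Regarding the deflation machinery described above: to my understanding it is part of ARPACK, which SciPy wraps, so the call below is only a usage illustration (the random test matrix and parameters are mine), not the paper's algorithm spelled out.

```python
# Compute a few extreme eigenvalues with ARPACK's implicitly restarted
# Lanczos/Arnoldi iteration (which performs locking/purging internally).
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
A = (A + A.T) / 2                        # symmetric test matrix
vals, vecs = eigsh(A, k=6, which='LA')   # 6 algebraically largest eigenvalues
print(np.sort(vals))
```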

Book ChapterDOI
01 Jan 1989
TL;DR: In the first volume of Spin Labeling: Theory and Applications, a chapter written by one of us presented a detailed theory for the interpretation of ESR spectra of spin labels in the slow-motional regime and contained many such examples.
Abstract: In the first volume of Spin Labeling: Theory and Applications, a chapter was written by one of us (J.H.F.) in which a detailed theory for the interpretation of ESR spectra of spin labels in the slow motional regime (Freed, 1976) was presented. The specific emphasis of that review was on the interpretation of nitroxide spin label spectra and contained many such examples. In the ensuing 13 years, there have been a number of important developments. First and foremost has been the development and implementation of powerful computational algorithms that have been specifically tailored for the solution of these types of problems (Moro and Freed, 1981; Vasavada et al., 1987). The use of these algorithms often leads to more than an order-of-magnitude reduction in computer time for the calculation of any given spectrum as well as a dramatic reduction in computer memory requirements. Concomitant with these improvements in computational methodology has been the revolution in the power and availability of computer hardware. Taken together, these improvements in hardware and software have made it possible to quickly and conveniently perform spectral calculations on small laboratory computers which formerly required the resources of a large mainframe computer. The increase in the available computing power has also made it feasible to incorporate more sophisticated models of molecular structure and dynamics into the line-shape calculation programs.

293 citations

Journal ArticleDOI
TL;DR: In this paper, the authors construct a substructure interface impedance operator and present a spectral analysis that demonstrates that the method of Craig and Bampton (CB) is the most natural component mode synthesis method.

84 citations
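
For context on component mode synthesis: the sketch below is a minimal Craig-Bampton (fixed-interface) reduction, not the paper's interface-impedance construction; the function name and the assumption of symmetric K, M with a positive definite interior mass block are mine.

```python
# Craig-Bampton reduction sketch: fixed-interface normal modes plus static
# constraint modes, for (K, M) partitioned into interior and interface DOFs.
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, interior, boundary, n_modes):
    Kii, Kib = K[np.ix_(interior, interior)], K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    Psi = -np.linalg.solve(Kii, Kib)        # static constraint modes
    w2, Phi = eigh(Kii, Mii)                # fixed-interface normal modes
    Phi = Phi[:, :n_modes]                  # keep the lowest n_modes of them
    ni, nb = len(interior), len(boundary)
    T = np.zeros((ni + nb, n_modes + nb))   # reduction basis [Phi Psi; 0 I]
    T[np.ix_(interior, range(n_modes))] = Phi
    T[np.ix_(interior, range(n_modes, n_modes + nb))] = Psi
    T[np.ix_(boundary, range(n_modes, n_modes + nb))] = np.eye(nb)
    return T.T @ K @ T, T.T @ M @ T         # reduced stiffness and mass
```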

Journal ArticleDOI
TL;DR: In this article, a symmetric singular value decomposition A = QΣQ^T of the complex symmetric matrix A is presented, together with an algorithm for computing this decomposition.

83 citations
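
A factorization A = QΣQ^T with unitary Q and nonnegative diagonal Σ exists when A is complex symmetric (the Takagi factorization). The sketch below is one classical construction via a real symmetric eigenproblem, not necessarily the cited article's algorithm, and it assumes the singular values are distinct.

```python
# Symmetric SVD (Takagi) sketch: if A = R + iS is complex symmetric, the real
# symmetric matrix [[R, S], [S, -R]] has eigenpairs (sigma, [x; y]) with
# q = x + i*y satisfying A*conj(q) = sigma*q, giving A = Q diag(sigma) Q^T.
import numpy as np

def symmetric_svd(A):
    n = A.shape[0]
    R, S = A.real, A.imag
    M = np.block([[R, S], [S, -R]])             # eigenvalues come in +/- sigma pairs
    w, V = np.linalg.eigh(M)
    idx = np.argsort(w)[::-1][:n]               # keep the nonnegative half
    Q = V[:n, idx] + 1j * V[n:, idx]
    return Q, w[idx]

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B + B.T                                     # complex symmetric test matrix
Q, s = symmetric_svd(A)
print(np.allclose(Q @ np.diag(s) @ Q.T, A))     # reconstruction holds
print(np.allclose(Q.conj().T @ Q, np.eye(5)))   # Q is unitary
```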

References
Journal ArticleDOI
TL;DR: Parlett presents the mathematical knowledge needed to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few.
Abstract: According to Parlett, 'Vibrations are everywhere, and so too are the eigenvalues associated with them. As mathematical models invade more and more disciplines, we can anticipate a demand for eigenvalue calculations in an ever richer variety of contexts.' Anyone who performs these calculations will welcome the reprinting of Parlett's book (originally published in 1980). In this unabridged, amended version, Parlett covers aspects of the problem that are not easily found elsewhere. The chapter titles convey the scope of the material succinctly. The aim of the book is to present mathematical knowledge that is needed in order to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few. The author explains why the selected information really matters and he is not shy about making judgments. The commentary is lively but the proofs are terse.

3,115 citations

Book
01 Jan 1980
TL;DR: Parlett presents the mathematical knowledge needed to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few.
Abstract: According to Parlett, 'Vibrations are everywhere, and so too are the eigenvalues associated with them. As mathematical models invade more and more disciplines, we can anticipate a demand for eigenvalue calculations in an ever richer variety of contexts.' Anyone who performs these calculations will welcome the reprinting of Parlett's book (originally published in 1980). In this unabridged, amended version, Parlett covers aspects of the problem that are not easily found elsewhere. The chapter titles convey the scope of the material succinctly. The aim of the book is to present mathematical knowledge that is needed in order to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few. The author explains why the selected information really matters and he is not shy about making judgments. The commentary is lively but the proofs are terse.

3,022 citations

Journal ArticleDOI
TL;DR: An algorithm is presented for computing the eigensystem of a rank-one modification of a symmetric matrix with known eigensystem, including the explicit computation of the updated eigenvectors and the treatment of multiple eigenvalues.
Abstract: An algorithm is presented for computing the eigensystem of the rank-one modification of a symmetric matrix with known eigensystem. The explicit computation of the updated eigenvectors and the treatment of multiple eigenvalues are discussed. The sensitivity of the computed eigenvectors to errors in the updated eigenvalues is shown by a perturbation analysis.

504 citations
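
The eigenvalues of such a rank-one modification D + rho*v*v^T are the roots of a secular equation, one in each interval between consecutive diagonal entries of D. The sketch below brackets and solves those roots numerically; it assumes rho > 0, distinct diagonal entries and no zero components of v, and it omits the deflation and eigenvector formulas the cited algorithm depends on.

```python
# Secular-equation sketch for the eigenvalues of diag(d) + rho * v v^T.
import numpy as np
from scipy.optimize import brentq

def rank_one_update_eigs(d, rho, v):
    d, v = np.asarray(d, float), np.asarray(v, float)
    order = np.argsort(d)
    d, v = d[order], v[order]
    f = lambda lam: 1.0 + rho * np.sum(v**2 / (d - lam))
    # Interlacing: one root in each (d_i, d_{i+1}), and one in
    # (d_max, d_max + rho * ||v||^2) for the largest eigenvalue.
    uppers = np.append(d[1:], d[-1] + rho * np.dot(v, v))
    roots = []
    for lo, hi in zip(d, uppers):
        gap = hi - lo
        roots.append(brentq(f, lo + 1e-10 * gap, hi - 1e-10 * gap))
    return np.array(roots)

rng = np.random.default_rng(3)
d = np.sort(rng.standard_normal(6)) * 3          # well-separated diagonal
v, rho = rng.standard_normal(6), 0.7
lam = rank_one_update_eigs(d, rho, v)
print(np.allclose(lam, np.linalg.eigvalsh(np.diag(d) + rho * np.outer(v, v))))
```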

Journal ArticleDOI
TL;DR: It is shown how a modification called selective orthogonalization stifles the formation of duplicate eigenvectors without increasing the cost of a Lanczos step significantly.
Abstract: The simple Lanczos process is very effective for finding a few extreme eigenvalues of a large symmetric matrix along with the associated eigenvectors. Unfortunately, the process computes redundant copies of the outermost eigenvectors and has to be used with some skill. In this paper it is shown how a modification called selective orthogonalization stifles the formation of duplicate eigenvectors without increasing the cost of a Lanczos step significantly. The degree of linear independence among the Lanczos vectors is controlled without resorting to full reorthogonalization.

416 citations
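
A much-simplified sketch of the idea, under assumptions of my own: "good" Ritz pairs are flagged with the classical residual bound beta*|bottom entry of Ritz vector| <= sqrt(eps)*||T||, and each new Lanczos vector is orthogonalized only against those Ritz vectors; the cited paper's bookkeeping is considerably more economical.

```python
# Selective-orthogonalization sketch: purge only converged ("good") Ritz
# directions from each new Lanczos vector instead of reorthogonalizing fully.
import numpy as np

def lanczos_selective(A, v0, steps):
    n = len(v0)
    eps = np.finfo(float).eps
    Q = np.zeros((n, steps + 1))
    alpha, beta = np.zeros(steps), np.zeros(steps)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(steps):
        w = A @ Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        beta[j] = np.linalg.norm(w)
        T = np.diag(alpha[:j + 1]) + np.diag(beta[:j], 1) + np.diag(beta[:j], -1)
        theta, S = np.linalg.eigh(T)
        # Ritz pairs whose residual bound beta_{j+1}*|s_bottom| is tiny are "good".
        good = np.flatnonzero(beta[j] * np.abs(S[-1, :]) <= np.sqrt(eps) * np.abs(theta).max())
        if good.size:
            Y = Q[:, :j + 1] @ S[:, good]        # converged Ritz vectors
            w -= Y @ (Y.T @ w)                   # selective orthogonalization
            beta[j] = np.linalg.norm(w)
        if beta[j] == 0.0:
            break
        Q[:, j + 1] = w / beta[j]
    return np.linalg.eigvalsh(T)   # Ritz values; the extremes approximate A's extremes
```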

Journal ArticleDOI
TL;DR: The method is shown to be stable and for a large class of matrices it is, asymptotically, faster by an order of magnitude than the QR method.
Abstract: A method is given for calculating the eigenvalues of a symmetric tridiagonal matrix. The method is shown to be stable, and for a large class of matrices it is, asymptotically, faster by an order of magnitude than the QR method.

404 citations
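
The abstract does not name the method, but it reads like a divide-and-conquer approach; under that assumption, the sketch below only exhibits the rank-one "tearing" step that splits a symmetric tridiagonal into two decoupled halves plus a rank-one correction (the actual method would recurse on the halves and merge them via a secular equation such as the one sketched for the rank-one update reference above; dense solvers stand in for both steps here).

```python
# Divide-and-conquer tearing sketch for a symmetric tridiagonal matrix T.
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 4                                    # tear T between rows m-1 and m
d, e = rng.standard_normal(n), rng.standard_normal(n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

beta = e[m - 1]                                # coupling element removed by the tear
T1 = T[:m, :m].copy(); T1[-1, -1] -= beta      # adjusted lower-right corner
T2 = T[m:, m:].copy(); T2[0, 0] -= beta        # adjusted upper-left corner
u = np.zeros(n); u[m - 1] = u[m] = 1.0
B = np.zeros((n, n)); B[:m, :m], B[m:, m:] = T1, T2
assert np.allclose(B + beta * np.outer(u, u), T)   # exact tearing identity

# Merge: with T1 = Q1 L1 Q1^T and T2 = Q2 L2 Q2^T, T is similar to a diagonal
# matrix plus a rank-one term, whose eigenvalues the real algorithm gets from
# a secular equation (a dense solver is used below just to check the identity).
L1, Q1 = np.linalg.eigh(T1)
L2, Q2 = np.linalg.eigh(T2)
z = np.concatenate([Q1[-1, :], Q2[0, :]])      # z = blkdiag(Q1, Q2)^T u
merged = np.diag(np.concatenate([L1, L2])) + beta * np.outer(z, z)
print(np.allclose(np.linalg.eigvalsh(merged), np.linalg.eigvalsh(T)))
```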