Open Access · Journal Article · DOI

The principle of minimized iterations in the solution of the matrix eigenvalue problem

Walter Arnoldi
- 01 Jan 1951
- Quarterly of Applied Mathematics, Vol. 9, Iss. 1, pp. 17-29
TLDR
In this paper, an interpretation of Dr. Cornelius Lanczos' iteration method, which he has named "minimized iterations", is discussed, expounding the method as applied to the solution of the characteristic matrix equations both in homogeneous and nonhomogeneous form.
Abstract
An interpretation of Dr. Cornelius Lanczos' iteration method, which he has named "minimized iterations", is discussed in this article, expounding the method as applied to the solution of the characteristic matrix equations both in homogeneous and nonhomogeneous form. This interpretation leads to a variation of the Lanczos procedure which may frequently be advantageous by virtue of reducing the volume of numerical work in practical applications. Both methods employ essentially the same algorithm, requiring the generation of a series of orthogonal functions through which a simple matrix equation of reduced order is established. The reduced matrix equation may be solved directly in terms of certain polynomial functions obtained in conjunction with the generated orthogonal functions, and the convergence of the solution may be observed as the order of the reduced matrix is successively increased with the order of the original matrix as a limit. The method of minimized iterations is recommended as a rapid means for determining a small number of the larger eigenvalues and modal columns of a large matrix and as a desirable alternative for various series expansions of the Fredholm problem.

1. The conventional iterative procedures. It is frequently required that real latent roots, or eigenvalues, and modal columns be determined for a real numerical matrix, u, of order n, in the characteristic homogeneous equation,*
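The procedure the abstract outlines, generating a series of orthogonal vectors and reading eigenvalue estimates off a matrix equation of reduced order, is what is now commonly called the Arnoldi process. The sketch below is a minimal NumPy illustration of that idea in modern notation: the characteristic homogeneous equation is taken in the conventional form u x = λ x, with λ the eigenvalue and x the modal column, and the function name, the upper Hessenberg matrix H, and all variable names are conventional choices for the example rather than the paper's own notation.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Build an orthonormal basis Q of the Krylov subspace
    span{v0, A v0, ..., A^(m-1) v0} and the reduced upper Hessenberg
    matrix H = Q^T A Q of order m."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # orthogonalize against previous vectors
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found; stop early
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m], H[:m, :m]

# Eigenvalues of the reduced matrix (Ritz values) approximate the
# largest-magnitude eigenvalues of A as the reduced order m grows.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
Q, H = arnoldi(A, rng.standard_normal(200), 30)
ritz = np.linalg.eigvals(H)
print(sorted(ritz, key=abs, reverse=True)[:3])
```

As the abstract notes, the estimates obtained from the reduced matrix can be watched for convergence as its order is successively increased, with the order of the original matrix as the limit.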



Citations
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
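As a concrete illustration of the technique this citation describes, here is a minimal usage sketch with scikit-learn's TSNE estimator; the dataset and parameter values are arbitrary choices for the example, not taken from the cited paper.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)   # 1797 samples of 64-dimensional digit images

# Map each datapoint to a location in a two-dimensional map.
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(emb.shape)                      # (1797, 2)
```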
Book

Iterative Methods for Sparse Linear Systems

Yousef Saad
TL;DR: This chapter discusses methods based on the normal equations of linear algebra; several of the techniques it uses are developed in earlier chapters of the book.
Journal Article · DOI

GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems

TL;DR: An iterative method for solving nonsymmetric linear systems that minimizes, at every step, the norm of the residual vector over a Krylov subspace.
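A minimal sketch of that residual-minimization property in practice, using SciPy's gmres solver on a small nonsymmetric system; the test matrix and the check at the end are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A nonsymmetric sparse system A x = b.
n = 500
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# GMRES chooses, at each step, the iterate in the growing Krylov subspace
# that minimizes the Euclidean norm of the residual b - A x.
x, info = gmres(A, b)
print(info, np.linalg.norm(b - A @ x))
```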
Book

Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods

TL;DR: In this book, which focuses on the use of iterative methods for solving large sparse systems of linear equations, templates are introduced to meet the needs of both the traditional user and the high-performance specialist.
Book

Iterative Methods for Linear and Nonlinear Equations

C. T. Kelley
TL;DR: Preface. How to Get the Software. Part I.
References
Journal Article · DOI

An iteration method for the solution of the eigenvalue problem of linear differential and integral operators

TL;DR: In this article, a systematic method is proposed for finding the latent roots and principal axes of a matrix without reducing the order of the matrix; it is characterized by a wide field of applicability and great accuracy, since the accumulation of rounding errors is avoided through the process of "minimized iterations".
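For a symmetric matrix, the minimized-iterations process described here reduces to a three-term recurrence, now usually called the Lanczos method. The sketch below is a minimal NumPy illustration in that modern formulation; all names and the stopping tolerance are conventional choices for the example, not the paper's notation.

```python
import numpy as np

def lanczos(A, v0, m):
    """Tridiagonalize a symmetric matrix A over a Krylov subspace of order m."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    for j in range(m):
        Q[:, j] = q
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w                     # diagonal entry of the reduced matrix
        w -= alpha[j] * q
        beta[j] = np.linalg.norm(w)          # off-diagonal entry / residual norm
        if beta[j] < 1e-12:
            m = j + 1
            break
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return Q[:, :m], T

rng = np.random.default_rng(1)
M = rng.standard_normal((300, 300))
A = M + M.T                                  # symmetric test matrix
Q, T = lanczos(A, rng.standard_normal(300), 40)
# Extremal eigenvalues of the small tridiagonal T approximate those of A.
print(np.sort(np.linalg.eigvalsh(T))[-3:], np.sort(np.linalg.eigvalsh(A))[-3:])
```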
Journal Article · DOI

XII.—Studies in Practical Mathematics. I. The Evaluation, with Applications, of a Certain Triple Product Matrix

TL;DR: In this paper, a matrix product H′A⁻¹K, where A is square and non-singular (its determinant is not zero), is computed by determinant multiplication; the matrix H′ is obtained from H by transposition, that is, by changing rows into columns.
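A minimal NumPy sketch of evaluating such a triple product H′A⁻¹K without forming the inverse explicitly; the dimensions and names are arbitrary choices for illustration, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((5, 3))   # H' below denotes its transpose
A = rng.standard_normal((5, 5))   # square and (almost surely) non-singular
K = rng.standard_normal((5, 4))

# Evaluate H' A^{-1} K by solving A X = K rather than inverting A.
X = np.linalg.solve(A, K)
triple = H.T @ X
print(triple.shape)               # (3, 4)
```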