Author

Cornelius Lanczos

Bio: Cornelius Lanczos is an academic researcher from the Dublin Institute for Advanced Studies. The author has contributed to research topics including Lanczos resampling and Einstein. The author has an h-index of 13 and has co-authored 18 publications receiving 7,807 citations. Previous affiliations of Cornelius Lanczos include the National Institute of Standards and Technology and Purdue University.

Papers
Journal ArticleDOI
TL;DR: A systematic method is proposed for finding the latent roots and principal axes of a matrix without reducing its order. The method has a wide field of applicability and great accuracy, since the process of minimized iterations avoids the accumulation of rounding errors.
Abstract: The present investigation designs a systematic method for finding the latent roots and the principal axes of a matrix, without reducing the order of the matrix. It is characterized by a wide field of applicability and great accuracy, since the accumulation of rounding errors is avoided, through the process of "minimized iterations". Moreover, the method leads to a well convergent successive approximation procedure by which the solution of integral equations of the Fredholm type and the solution of the eigenvalue problem of linear differential and integral operators may be accomplished.
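The "minimized iterations" procedure this abstract describes survives today as the Lanczos algorithm: it builds an orthonormal Krylov basis in which the matrix becomes tridiagonal, so its latent roots can be read off a much simpler matrix. A minimal NumPy sketch, with full reorthogonalization added as a modern safeguard against the rounding-error accumulation the paper discusses; the test matrix and starting vector are arbitrary illustrative data, not from the paper:

```python
import numpy as np

def lanczos(A, v0, m):
    """Lanczos tridiagonalization: build an orthonormal Krylov basis Q and a
    tridiagonal T with Q.T @ A @ Q = T (exactly, in exact arithmetic)."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        w = A @ q - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        # full reorthogonalization: guard against loss of orthogonality in floating point
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M + M.T                      # symmetric test matrix
Q, T = lanczos(A, rng.standard_normal(6), 6)
# with a full set of n = 6 steps, the eigenvalues of T match those of A
print(np.allclose(np.sort(np.linalg.eigvalsh(T)), np.sort(np.linalg.eigvalsh(A))))
```

In practice the method is run for m much smaller than n, where the extreme eigenvalues of T already approximate those of A well.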

3,947 citations

Book
15 Dec 1949

2,083 citations

Journal ArticleDOI
TL;DR: The authors adapt the general principles of the previous investigation to the specific demands that arise when one is interested not in the complete analysis of a matrix but only in the more special problem of solving a given set of linear equations.
Abstract: In an earlier publication [14] a method was described which generated the eigenvalues and eigenvectors of a matrix by a successive algorithm based on minimizations by least squares. The advantage of this method consists in the fact that the successive iterations are constantly employed with maximum efficiency which guarantees fastest convergence for a given number of iterations. Moreover, with the proper care the accumulation of rounding errors can be avoided. The resulting high precision is of great advantage if the separation of closely bunched eigenvalues and eigenvectors is demanded [16]. It was pointed out in [14, p. 256] that the inversion of a matrix, and thus the solution of simultaneous systems of linear equations, is contained in the general procedure as a special case. However, in view of the great importance associated with the solution of large systems of linear equations, this problem deserved more than passing attention. It is the purpose of the present discussion to adapt the general principles of the previous investigation to the specific demands that arise if we are not interested in the complete analysis of a matrix but only in the more special problem of obtaining the solution of a given set of linear equations.

737 citations

Book
01 Jan 1961
TL;DR: Among other topics, the book studies the Fourier series for differentiable functions and functions of higher differentiability, presents alternative methods of estimating the remainder, and treats the smoothing of the Gibbs oscillations of the finite Fourier expansion.
Abstract: Preface. Bibliography.
1. Interpolation: Introduction; The Taylor expansion; The finite Taylor series with the remainder term; Interpolation by polynomials; The remainder of Lagrangian interpolation formula; Equidistant interpolation; Local and global interpolation; Interpolation by central differences; Interpolation around the midpoint of the range; The Laguerre polynomials; Binomial expansions; The decisive integral transform; Binomial expansions of the hypergeometric type; Recurrence relations; The Laplace transform; The Stirling expansion; Operations with the Stirling functions; An integral transform of the Fourier type; Recurrence relations associated with the Stirling series; Interpolation of the Fourier transform; The general integral transform associated with the Stirling series; Interpolation of the Bessel functions.
2. Harmonic Analysis: Introduction; The Fourier series for differentiable functions; The remainder of the finite Fourier expansion; Functions of higher differentiability; An alternative method of estimation; The Gibbs oscillations of the finite Fourier series; The method of the Green's function; Non-differentiable functions; Dirac's delta function; Smoothing of the Gibbs oscillations by Fejer's method; The remainder of the arithmetic mean method; Differentiation of the Fourier series; The method of the sigma factors; Local smoothing by integration; Smoothing of the Gibbs oscillations by the sigma method; Expansion of the delta function; The triangular pulse; Extension of the class of expandable functions; Asymptotic relations for the sigma factors; The method of trigonometric interpolation; Error bounds for the trigonometric interpolation method; Relation between equidistant trigonometric and polynomial interpolations; The Fourier series in the curve fitting.
3. Matrix Calculus: Introduction; Rectangular matrices; The basic rules of matrix calculus; Principal axis transformation of a symmetric matrix; Decomposition of a symmetric matrix; Self-adjoint systems; Arbitrary n x m systems; Solvability of the general n x m system; The fundamental decomposition theorem; The natural inverse of a matrix; General analysis of linear systems; Error analysis of linear systems; Classification of linear systems; Solution of incomplete systems; Over-determined systems; The method of orthogonalisation; The use of over-determined systems; The method of successive orthogonalisation; The bilinear identity; Minimum property of the smallest eigenvalue.
4. The Function Space: Introduction; The viewpoint of pure and applied mathematics; The language of geometry; Metrical spaces of infinitely many dimensions; The function as a vector; The differential operator as a matrix; The length of a vector; The scalar product of two vectors; The closeness of the algebraic approximation; The adjoint operator; The bilinear identity; The extended Green's identity; The adjoint boundary conditions; Incomplete systems; Over-determined systems; Compatibility under inhomogeneous boundary conditions; Green's identity in the realm of partial differential operators; The fundamental field operations of vector analysis; Solution of incomplete systems.
5. The Green's Function: Introduction; The role of the adjoint equation; The role of Green's identity; The delta function; The existence of the Green's function; Inhomogeneous boundary conditions; The Green's vector; Self-adjoint systems; The calculus of variations; The canonical equations of Hamilton; The Hamiltonisation of partial operators; The reciprocity theorem; Self-adjoint problems; Symmetry of the Green's function; Reciprocity of the Green's vector; The superposition principle of linear operators; The Green's function in the realm of ordinary differential operators; The change of boundary conditions; The remainder of the Taylor series; The remainder of the Lagrangian interpolation formula.
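The "sigma factors" in the Harmonic Analysis chapter are a concrete, easily demonstrated device: damping the k-th Fourier coefficient of an order-N partial sum by the factor sinc(k/N) suppresses the Gibbs oscillations at a jump. A small NumPy sketch on the square wave sign(x); the truncation order and sample grid are arbitrary choices for illustration:

```python
import numpy as np

N = 64                                   # truncation order of the partial sum
x = np.linspace(-np.pi, np.pi, 4001)
k = np.arange(1, N, 2)                   # odd harmonics of the square wave

# plain partial Fourier sum of sign(x) on (-pi, pi): (4/pi) * sum sin(kx)/k
S = (4 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

# same sum with Lanczos sigma factors sigma_k = sinc(k/N) = sin(pi k/N)/(pi k/N)
sigma = np.sinc(k / N)
S_sigma = (4 / np.pi) * np.sum(sigma * np.sin(np.outer(x, k)) / k, axis=1)

# the plain sum overshoots the jump by roughly 9% (Gibbs phenomenon);
# the sigma-smoothed sum overshoots far less
print(S.max(), S_sigma.max())
```

The price of the smoothing is a slightly wider transition region at the discontinuity, a trade-off the book analyzes in detail.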

552 citations

Book
01 Jan 1966

223 citations


Cited by
Journal ArticleDOI
TL;DR: A detailed description and comparison of algorithms for performing ab initio quantum-mechanical calculations using pseudopotentials and a plane-wave basis set is presented.

47,666 citations

Book
01 Apr 2003
TL;DR: This book surveys iterative methods for sparse linear systems, covering basic iterative methods, Krylov subspace methods, methods related to the normal equations, and preconditioning techniques.
Abstract: Preface 1. Background in linear algebra 2. Discretization of partial differential equations 3. Sparse matrices 4. Basic iterative methods 5. Projection methods 6. Krylov subspace methods Part I 7. Krylov subspace methods Part II 8. Methods related to the normal equations 9. Preconditioned iterations 10. Preconditioning techniques 11. Parallel implementations 12. Parallel preconditioners 13. Multigrid methods 14. Domain decomposition methods Bibliography Index.
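As a taste of the "Basic iterative methods" chapter listed above, here is a minimal Jacobi iteration, the simplest splitting method on which the Krylov subspace chapters build. The test system is an assumed strictly diagonally dominant example, not one taken from the book:

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration: split A into its diagonal D and remainder R,
    then iterate x <- D^{-1} (b - R x).
    Converges whenever A is strictly diagonally dominant."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# strictly diagonally dominant test system (illustrative, exact solution is all ones)
A = np.array([[10.0, 2.0, 1.0],
              [1.0, 12.0, 2.0],
              [2.0, 1.0, 10.0]])
b = np.array([13.0, 15.0, 13.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b))   # True
```

Krylov methods such as conjugate gradients replace this fixed-point sweep with optimal steps in a growing subspace, which is what makes them the workhorses of the later chapters.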

13,484 citations

Journal ArticleDOI
TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.
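Community structure of the kind described here is commonly quantified by the Newman-Girvan modularity, a quality function central to many of the methods this review surveys: a partition scores well when communities contain more internal edges than a degree-preserving random graph would predict. A minimal NumPy sketch; the two-triangle graph is a toy example, not taken from the paper:

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity:
    Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j)."""
    k = A.sum(axis=1)              # vertex degrees
    two_m = k.sum()                # 2m = total degree = twice the edge count
    same = np.equal.outer(labels, labels)   # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# toy graph: two triangles joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

good = modularity(A, np.array([0, 0, 0, 1, 1, 1]))   # cut at the bridge
bad = modularity(A, np.array([0, 1, 0, 1, 0, 1]))    # arbitrary split
print(good > bad)   # True: the natural partition scores higher
```

Many detection algorithms in the review amount to heuristics for maximizing this quantity over all partitions, since exact maximization is computationally hard.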

9,057 citations

Journal ArticleDOI
TL;DR: A thorough exposition of the main elements of the clustering problem, with a special focus on techniques designed by statistical physicists, covering crucial issues such as the significance of clustering, how methods should be tested and compared against each other, and applications to real networks.

8,432 citations

Journal ArticleDOI
TL;DR: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns and it is shown that this method is a special case of a very general method which also includes Gaussian elimination.
Abstract: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns. The solution is given in n steps. It is shown that this method is a special case of a very general method which also includes Gaussian elimination. These general algorithms are essentially algorithms for finding an n dimensional ellipsoid. Connections are made with the theory of orthogonal polynomials and continued fractions.
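The n-step iterative algorithm this abstract describes is the conjugate gradient method. A minimal NumPy sketch for symmetric positive definite systems; the test matrix and right-hand side are arbitrary illustrative data, not from the paper:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Conjugate gradient for symmetric positive definite A.
    In exact arithmetic the iteration terminates in at most n steps."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        step = rs / (p @ Ap)         # exact line search along p
        x = x + step * p
        r = r - step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new direction, A-conjugate to the old ones
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M.T @ M + 5 * np.eye(5)          # symmetric positive definite test matrix
b = rng.standard_normal(5)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))   # True
```

The connection to orthogonal polynomials and continued fractions mentioned in the abstract shows up in the recurrences above: the residuals form an orthogonal sequence and the search directions are mutually A-conjugate.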

7,598 citations