
Showing papers on "Sparse approximation published in 1983"



Book ChapterDOI
01 Jan 1983
TL;DR: A unified treatment of projection methods for computing eigenvalues and eigenvectors of large sparse matrices, covering the most commonly used algorithms for large sparse eigenproblems: the Lanczos algorithm, Arnoldi's method, and subspace iteration.
Abstract: We present a unified approach to several methods for computing eigenvalues and eigenvectors of large sparse matrices. The methods considered are projection methods, i.e. Galerkin type methods, and include the most commonly used algorithms for solving large sparse eigenproblems like the Lanczos algorithm, Arnoldi's method and the subspace iteration. We first derive some a priori error bounds for general projection methods, in terms of the distance of the exact eigenvector from the subspace of approximation. Then this distance is estimated for some typical methods, particularly those for unsymmetric problems.
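
The Krylov-subspace machinery behind these methods fits in a few lines. Below is a minimal sketch of the Arnoldi process (an illustration, not the paper's code): it builds an orthonormal basis V of the Krylov subspace and the small Hessenberg matrix H that projects A onto it.

```python
import numpy as np

def arnoldi(A, v0, m):
    """m steps of Arnoldi: the columns of V orthonormally span the Krylov
    subspace K_m(A, v0), and H is the (m+1) x m Hessenberg projection of A."""
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]                     # one (sparse) matrix-vector product
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # breakdown: an invariant subspace was found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

The Ritz values, i.e. the eigenvalue estimates whose accuracy the paper's a priori bounds govern, are the eigenvalues of the leading m x m block `H[:m, :m]`.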

63 citations


Book
01 Jan 1983
TL;DR: This dissertation considers two combinatorial problems arising in large-scale, sparse optimization: approximating the Hessian matrix of a smooth, non-linear function by finite differencing, and finding as sparse a representation as possible of a given set of linear constraints.
Abstract: This dissertation considers two combinatorial problems arising in large-scale, sparse optimization. The first is the problem of approximating the Hessian matrix of a smooth, non-linear function by finite differencing, where the object is to minimize the required number of gradient evaluations. The second is to find as sparse a representation as possible of a given set of linear constraints.
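
The combinatorial core of the first problem is column grouping: Hessian columns whose sparsity patterns share no row can be estimated with a single gradient evaluation. The sketch below uses a simple greedy grouping (an assumption for illustration, not the dissertation's algorithm).

```python
import numpy as np

def group_columns(pattern):
    """Greedily partition columns so that no two columns in a group share a
    structural nonzero row; `pattern` is a boolean (n, n) array."""
    groups = []
    for j in range(pattern.shape[1]):
        for g in groups:
            if not any(np.any(pattern[:, j] & pattern[:, k]) for k in g):
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

def sparse_hessian_fd(grad, x, pattern, h=1e-6):
    """Forward-difference approximation of a sparse Hessian: one extra
    gradient evaluation per column group instead of one per column."""
    n = x.size
    H = np.zeros((n, n))
    g0 = grad(x)
    for g in group_columns(pattern):
        d = np.zeros(n)
        d[g] = h                        # perturb every column of the group at once
        diff = (grad(x + d) - g0) / h
        for j in g:                     # rows of group members are disjoint, so
            rows = pattern[:, j]        # each difference entry is attributable
            H[rows, j] = diff[rows]     # to exactly one column
    return H
```

Fewer groups mean fewer gradient evaluations, which is precisely the minimization problem the dissertation studies.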

60 citations


Journal ArticleDOI
TL;DR: Numerical experiments show that the method of normal equations should be considered when the observation matrix is sparse and well conditioned, and for ill-conditioned problems, the algorithm based on Givens rotations is preferable.
Abstract: The method of normal equations, the Peters–Wilkinson algorithm and an algorithm based on Givens rotations for solving large sparse linear least squares problems are discussed and compared. Numerical experiments show that the method of normal equations should be considered when the observation matrix is sparse and well conditioned. For ill-conditioned problems, the algorithm based on Givens rotations is preferable.
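
The trade-off the experiments expose can be seen on a toy problem. The sketch below is illustrative only: `A` and `b` are random placeholders, and since SciPy has no built-in sparse QR, a dense least-squares solve stands in for the Givens-based orthogonalization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

A = sp.random(50, 10, density=0.2, format="csc", random_state=0)
b = np.random.default_rng(0).standard_normal(50)

# Normal equations: cheap and sparsity-friendly, but cond(A^T A) = cond(A)^2,
# which is why the paper recommends it only for well-conditioned problems.
x_ne = spla.spsolve((A.T @ A).tocsc(), A.T @ b)

# Orthogonalization (Givens/Householder QR): more work, numerically stabler.
x_qr, *_ = np.linalg.lstsq(A.toarray(), b, rcond=None)

print(np.linalg.norm(x_ne - x_qr))   # small when A is well conditioned
```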

30 citations



Journal ArticleDOI
TL;DR: The FORTRAN implementation of an efficient algorithm which solves the Assignment Problem for sparse matrices is given, and computational results are presented showing the proposed method to be generally superior to the best known algorithms.
Abstract: The FORTRAN implementation of an efficient algorithm which solves the Assignment Problem for sparse matrices is given. Computational results are presented, showing the proposed method to be generally superior to the best known algorithms.
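
The paper's FORTRAN code is not reproduced here, but SciPy (1.6+) ships a sparse assignment solver that exploits the same structure: entries absent from the sparse cost matrix are treated as forbidden pairings rather than zero-cost ones.

```python
import scipy.sparse as sp
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

# 3x3 sparse cost matrix; zeros are dropped, i.e. those pairings are forbidden
cost = sp.csr_matrix([[4, 0, 2],
                      [0, 3, 0],
                      [1, 0, 5]])
row, col = min_weight_full_bipartite_matching(cost)
print(col, cost[row, col].sum())   # optimal assignment and its total cost (6)
```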

17 citations


Journal ArticleDOI
TL;DR: The numerical implementation of normalized factorization procedures for the solution of large sparse linear finite element systems is presented, and FORTRAN subroutines are given for the efficient solution of the resulting large sparse symmetric linear systems of algebraic equations.
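
The workflow such subroutines support is factor-once, solve-many. The sketch below is only an orientation under stated assumptions: SciPy's sparse LU stands in for the paper's normalized factorization, and a 1-D Laplacian stands in for a finite element stiffness matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # stand-in stiffness matrix
b = np.ones(n)                                                     # stand-in load vector

solve = spla.factorized(A)           # factor once ...
x = solve(b)                         # ... then solve cheaply for each load vector
print(np.linalg.norm(A @ x - b))     # residual check
```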

8 citations


Journal ArticleDOI
TL;DR: A new quasi-Newton updating formula for sparse optimization calculations is presented that makes combined use of a simple strategy for fixing symmetry and a Schubert correction to the upper triangle of a permuted Hessian approximation.
Abstract: A new quasi-Newton updating formula for sparse optimization calculations is presented. It makes combined use of a simple strategy for fixing symmetry and a Schubert correction to the upper triangle of a permuted Hessian approximation. Interesting properties of this new update are that it is given in closed form and that it does not satisfy the secant condition at every iteration of the calculations. Numerical results are given showing that this update compares favorably with the sparse PSB update and that it appears to have a superlinear rate of convergence.
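
For orientation, here is a minimal sketch of the plain Schubert (sparse Broyden) correction that the paper builds on. The paper's actual update differs: it symmetrizes, works on the upper triangle of a permuted approximation, and deliberately does not enforce the secant equation at every iteration, whereas the plain correction below does.

```python
import numpy as np

def schubert_update(B, s, y, pattern):
    """Sparse Broyden/Schubert update of B on a fixed sparsity pattern.
    s is the step, y the gradient difference, pattern a boolean (n, n) array."""
    Bn = B.copy()
    r = y - B @ s                          # residual of the secant equation
    for i in range(B.shape[0]):
        si = np.where(pattern[i], s, 0.0)  # step restricted to row i's pattern
        denom = si @ si
        if denom > 0.0:
            Bn[i] += (r[i] / denom) * si   # rank-one correction of row i only
    return Bn   # Bn @ s == y whenever each row's restricted step is nonzero
```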

8 citations


Journal ArticleDOI
J. M. McNamee1
TL;DR: Eight FORTRAN subroutines are presented for multiplying and adding pairs of sparse matrices in special cases.
Abstract: Eight FORTRAN subroutines are presented for multiplying and adding pairs of sparse matrices in special cases, that is, in which one of the pair is full and/or a vector or an elementary matrix. Also, a new subroutine for multiplication of two sparse matrices is presented. A detailed description of the individual subroutines and input thereto is given in [1] and in the initial comments in each subroutine.
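
The special cases the routines cover map directly onto a few lines of modern SciPy (shown here only as a functional equivalent, not the paper's code):

```python
import numpy as np
import scipy.sparse as sp

A = sp.random(4, 4, density=0.4, format="csr", random_state=1)
B = sp.random(4, 4, density=0.4, format="csr", random_state=2)
v = np.arange(4.0)

print(A @ v)              # sparse matrix times full vector
print((A + B).toarray())  # sparse plus sparse
print((A @ B).toarray())  # sparse times sparse (the paper's new subroutine)
```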

5 citations



Book ChapterDOI
01 Jan 1983
TL;DR: A principal characteristic of multifrontal schemes for solving sparse sets of linear equations is their use of full submatrices during the elimination; thus, in the symbolic phases, much work and storage can be saved by storing only index lists rather than the full submatrices.
Abstract: A principal characteristic of multifrontal schemes for solving sparse sets of linear equations is their use of full submatrices during the elimination. Thus, in the symbolic phases, much work and storage can be saved by storing only index lists rather than the full submatrices.
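
A toy illustration of that symbolic phase (not the chapter's algorithm): only the index list of each front is kept, and eliminating a pivot merges its list into its parent's, which is exactly where fill-in appears.

```python
def symbolic_elimination(adj, n):
    """adj maps each node to the set of its graph neighbors. Returns, for each
    pivot k, the index list of its front (its below-diagonal row pattern)."""
    pattern = {k: {j for j in adj[k] if j > k} for k in range(n)}
    for k in range(n):
        rows = pattern[k]
        if rows:
            p = min(rows)                 # parent front in the elimination tree
            pattern[p] |= rows - {p}      # merge index lists: this creates fill
    return pattern

# 4-cycle 0-1-2-3-0: eliminating node 0 fills in the edge (1, 3)
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(symbolic_elimination(adj, 4))   # {0: {1, 3}, 1: {2, 3}, 2: {3}, 3: set()}
```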

01 Apr 1983
TL;DR: A different approach is suggested to the problem of updating a sparse Hessian approximation so that it stays positive-definite under reasonable circumstances: a sparse Broyden, or Schubert, update is applied directly to the Cholesky factor of the current Hessian approximation, defining the next Hessian approximation implicitly in terms of its Cholesky factorization.
Abstract: A very important problem in numerical optimization is to find a way to update a sparse Hessian approximation so that it will be positive-definite under reasonable circumstances. This problem has motivated research, which has yet to show much progress, toward a "sparse BFGS method." In this paper, the authors suggest a different approach to the problem, based on using a sparse Broyden, or Schubert, update directly on the Cholesky factor of the current Hessian approximation to define the next Hessian approximation implicitly in terms of its Cholesky factorization. This approach has the added advantage of cheaply finding the Newton step, since no factorization step is required. The difficulty with the approach is in finding a satisfactory secant or quasi-Newton condition to use in the update.
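
The payoff mentioned above is easy to make concrete (a sketch under the assumption that the approximation is kept as B = L L^T; the helper name is hypothetical): once L is held directly, a quasi-Newton step costs only two triangular solves.

```python
import numpy as np
from scipy.linalg import solve_triangular

def newton_step(L, g):
    """Solve (L @ L.T) p = -g with two triangular solves, no refactorization."""
    w = solve_triangular(L, -g, lower=True)
    return solve_triangular(L.T, w, lower=False)
```

Updating L itself by a Schubert correction while preserving sparsity is then the step where the report's open question, the choice of secant condition, arises.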