
Showing papers on "Sparse approximation published in 1982"


Journal ArticleDOI
TL;DR: Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.
Abstract: An iterative method is given for solving $Ax = b$ and $\min \|Ax - b\|_2$, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.
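
This is the original LSQR reference; SciPy's `scipy.sparse.linalg.lsqr` implements the Paige-Saunders algorithm. A minimal usage sketch (the random test matrix is my own, not from the paper):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sp.random(2000, 500, density=0.01, format='csr', random_state=rng)
b = rng.standard_normal(2000)

# Solves min ||Ax - b||_2; for a square consistent system this solves Ax = b.
x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print(istop, itn, r1norm)   # stopping reason, iteration count, residual norm
```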

4,189 citations


Book ChapterDOI
01 Jan 1982
TL;DR: An algorithm is described for solving large-scale nonlinear programs whose objective and constraint functions are smooth and continuously differentiable.
Abstract: An algorithm is described for solving large-scale nonlinear programs whose objective and constraint functions are smooth and continuously differentiable. The algorithm is of the projected Lagrangian type, involving a sequence of sparse, linearly constrained subproblems whose objective functions include a modified Lagrangian term and a modified quadratic penalty function.
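
In notation of my own choosing (the paper's precise formulation may differ), a typical projected-Lagrangian subproblem of the kind described, with $\tilde c_k(x) = c(x_k) + J(x_k)(x - x_k)$ the constraints linearized at the current iterate, is roughly

$$\min_x \; f(x) - \lambda_k^T\bigl(c(x) - \tilde c_k(x)\bigr) + \tfrac{1}{2}\rho\,\bigl\|c(x) - \tilde c_k(x)\bigr\|_2^2 \quad \text{subject to} \quad \tilde c_k(x) = 0,\; l \le x \le u,$$

which exhibits both the modified Lagrangian term and the modified quadratic penalty mentioned in the abstract.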

331 citations



Journal ArticleDOI
TL;DR: An implementation of sparse ${LDL}^T$ and LU factorization and back-substitution, based on a new scheme for storing sparse matrices, is presented and appears to be as efficient in terms of work and storage as existing schemes.
Abstract: An implementation of sparse ${LDL}^T$ and LU factorization and back-substitution, based on a new scheme for storing sparse matrices, is presented. The new method appears to be as efficient in terms of work and storage as existing schemes. It is more amenable to efficient implementation on fast pipelined scientific computers.
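
For illustration only (SciPy's SuperLU wrapper, not the storage scheme of the paper), the factorize-once / back-substitute-cheaply pattern looks like:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

lu = splu(A)        # one sparse LU factorization ...
x = lu.solve(b)     # ... then cheap back-substitution per right-hand side
print(np.linalg.norm(A @ x - b))
```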

202 citations


Journal ArticleDOI
TL;DR: A comprehensive set of test problems will lead to a better understanding of the range of structures in sparse matrix problems and thence to better classification and development of algorithms.
Abstract: The development, analysis and production of algorithms in sparse linear algebra often requires the use of test problems to demonstrate the effectiveness and applicability of the algorithms. Many algorithms have been developed in the context of specific application areas and have been tested on sets of test problems collected by the developers. Comparisons of algorithms across application areas, and comparisons between algorithms, have often been incomplete due to the lack of a comprehensive set of test problems. Additionally, we believe that a comprehensive set of test problems will lead to a better understanding of the range of structures in sparse matrix problems and thence to better classification and development of algorithms. We have agreed to sponsor and maintain a general library of sparse matrix test problems, available on request to anyone for a nominal fee to cover postal charges. Contributors to the library will, of course, receive a free copy.

192 citations


Journal ArticleDOI
TL;DR: It is verified (by many numerical experiments) that the use of sparse matrix techniques with IR may also result in a reduction of both the computing time and the storage requirements.
Abstract: It is well known that if Gaussian elimination with iterative refinement (IR) is used in the solution of systems of linear algebraic equations $Ax = b$ whose matrices are dense, then the accuracy of the results will usually be greater than the accuracy obtained by the use of Gaussian elimination without iterative refinement (DS). However, both more storage (about $100\% $, because a copy of matrix A is needed) and more computing time (some extra time is needed to perform the iterative process) must be used with IR. Normally, when the matrix is sparse the accuracy of the solution computed by some sparse matrix technique and IR will still be greater. In this paper it is verified (by many numerical experiments) that the use of sparse matrix techniques with IR may also result in a reduction of both the computing time and the storage requirements (this will never happen when IR is applied to dense matrices). Two parameters, a drop-tolerance $T \geqq 0$ and a stability factor $u > 1$, are introduced in the effo...
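
A sketch of the combination being tested, with SciPy's generic incomplete LU (`spilu`) standing in, by assumption, for the paper's drop-tolerance factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-2)       # small entries dropped: sparser factors
x = ilu.solve(b)                    # cheap but inexact solve
for _ in range(10):                 # iterative refinement restores accuracy
    r = b - A @ x
    if np.linalg.norm(r) <= 1e-12 * np.linalg.norm(b):
        break
    x += ilu.solve(r)
```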

63 citations


Journal ArticleDOI
TL;DR: Several algorithms are developed which extend the method of George and Heath for sparse linear least squares problems to include rank-deficient problems, linear equality constrained problems, and updating of solutions.
Abstract: Several algorithms are developed which extend the method of George and Heath for sparse linear least squares problems to include rank-deficient problems, linear equality constrained problems, and updating of solutions. An application of these methods to the solution of sparse square nonsymmetric linear systems is also presented.
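
In notation of my own choosing, the equality-constrained extension solves

$$\min_x \|Ax - b\|_2 \quad \text{subject to} \quad Cx = d.$$

One standard sparsity-friendly reduction (the method of weighting; the abstract does not say whether the authors use exactly this) replaces it by the unconstrained problem $\min_x \left\| \begin{pmatrix} \mu C \\ A \end{pmatrix} x - \begin{pmatrix} \mu d \\ b \end{pmatrix} \right\|_2$ for a large weight $\mu$, which the sparse QR machinery of George and Heath can then handle directly.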

57 citations


Book ChapterDOI
01 Jan 1982
TL;DR: The benefits of using full matrix techniques in the later stages of Gaussian elimination are indicated and frontal and multi-frontal schemes where such benefits are obtained automatically are described.
Abstract: We discuss ways in which code for Gaussian elimination on full systems can be used in crucial parts of the code for the solution of sparse linear equations. We indicate the benefits of using full matrix techniques in the later stages of Gaussian elimination and describe frontal and multi-frontal schemes where such benefits are obtained automatically. We also illustrate the advantages of such approaches when running sparse codes on vector machines.
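
A toy illustration of why full-matrix kernels apply here: eliminating a fully summed variable from a dense frontal matrix is just a dense Schur-complement update (framing and names are mine):

```python
import numpy as np

def eliminate_fully_summed(F):
    """Pivot on F[0, 0] (assumed fully summed); return the dense
    Schur complement that is passed on to the parent front."""
    pivot = F[0, 0]
    l = F[1:, 0] / pivot            # column of L below the pivot
    u = F[0, 1:].copy()             # row of U to the right of the pivot
    return F[1:, 1:] - np.outer(l, u)

F = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
print(eliminate_fully_summed(F))    # 2x2 dense update, BLAS-friendly
```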

50 citations


Journal ArticleDOI
TL;DR: The algorithms realized by GPSKCA provide the same mathematical capabilities as REDUCE, and remove some implicit restrictions on the matrices that can be reordered.
Abstract: Given the structure of a symmetric or structurally symmetric sparse matrix, GPSKCA attempts to find a symmetric reordering of the matrix that produces a smaller bandwidth or profile. References [1], [4], [5], and [6] explain in detail the algorithms realized by GPSKCA. This algorithm provides the same mathematical capabilities as provided by REDUCE, Algorithms 508 and 509, but requires less memory and time and removes some implicit restrictions on the matrices that can be reordered. A description of the differences in the implementation and their effects is given in [7]; GPSKCA and REDUCE produce the same bandwidth and profile on all problems for which REDUCE executes successfully. The package of subroutines is invoked by the FORTRAN statement
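
The Gibbs-Poole-Stockmeyer and Gibbs-King orderings behind GPSKCA are not available in SciPy, but the related reverse Cuthill-McKee reordering illustrates the same goal, a symmetric permutation that shrinks bandwidth and profile:

```python
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = sp.random(200, 200, density=0.02, format='csr')
A = (A + A.T).tocsr()                    # symmetric nonzero pattern
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                     # reordered matrix, reduced bandwidth
```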

41 citations


Journal ArticleDOI
TL;DR: The sparse form of one of the most successful Variable Metric Methods (BFGS) is used to solve power system optimization problems using the sparse factors of the Hessian matrix as opposed to a full inverse Hessian.
Abstract: The sparse form of one of the most successful Variable Metric Methods (BFGS [1, 2]) is used to solve power system optimization problems. The main characteristic of the method is that the sparse factors of the Hessian matrix are used as opposed to a full inverse Hessian. In addition, these factors are updated at every BFGS iteration using a fast and robust sparsity oriented updating algorithm.
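
For reference, the standard BFGS update being maintained, with $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$:

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}.$$

The point of the paper is to apply this update to sparse factors of $B_k$ rather than carry a full dense inverse Hessian.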

21 citations


Journal ArticleDOI
TL;DR: This work considers direct methods based on Gaussian elimination for solving sparse sets of linear equations, using a “multi-frontal” technique that moves the reals within storage so that all operations are performed on full matrices, even though the pivotal strategy is minimum degree.


Journal ArticleDOI
Amir Schoor
TL;DR: A fast algorithm for the multiplication of two sparse matrices, whose average time complexity is an order of magnitude better than that of standard known algorithms, and which avoids unnecessary index comparisons, requiring only $O(D_1 D_2 NK)$ time.
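
Schoor's algorithm itself is not reproduced in this listing; the following is only a generic sketch of the principle of iterating over stored nonzeros, so that no time is spent comparing indices of zero entries:

```python
def sparse_matmul(A_rows, B_rows):
    """A and B stored as {row: {col: value}} dicts of nonzeros; returns A @ B."""
    C = {}
    for i, row in A_rows.items():
        acc = {}
        for k, a_ik in row.items():                    # nonzeros of A's row i
            for j, b_kj in B_rows.get(k, {}).items():  # nonzeros of B's row k
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 2: {0: 5.0}}
print(sparse_matmul(A, B))   # {0: {1: 4.0, 0: 10.0}}
```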

Book ChapterDOI
01 Jan 1982
TL;DR: In this paper, the authors describe frontal schemes for the solution of large sparse sets of linear equations and discuss the implementation of a code in the Harwell Subroutine Library which solves unsymmetric systems using this approach.
Abstract: We first describe frontal schemes for the solution of large sparse sets of linear equations and then discuss the implementation of a code in the Harwell Subroutine Library which solves unsymmetric systems using this approach. We indicate the performance of our software on some test examples.

Book ChapterDOI
01 Jan 1982
TL;DR: This paper surveys software for the solution of sparse sets of linear equations and examines codes which can be used to solve equations arising in the solutions of elliptic partial differential equations.
Abstract: This paper surveys software for the solution of sparse sets of linear equations. In particular we examine codes which can be used to solve equations arising in the solution of elliptic partial differential equations.

Book ChapterDOI
Linda Kaufman
01 Jan 1982
TL;DR: It is found that these matrices often have some useful algebraic structure, and that classical methods often applied to solving nonsingular problems arising in the study of differential equations can be used for these types of problems.
Abstract: In this paper we describe iterative methods for finding the null vectors of large sparse unsymmetric singular matrices which arise while modeling queuing networks. We find that these matrices often have some useful algebraic structure, and that classical methods often applied to solving nonsingular problems arising in the study of differential equations can be used for these types of problems.
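
One classical approach of the kind alluded to is inverse iteration with a small shift and a sparse LU; the toy generator matrix and all names below are mine, and this is not necessarily the authors' method:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def null_vector(M, shift=1e-8, maxit=50, tol=1e-10):
    """Inverse iteration: converges to the eigenvector nearest the shift,
    i.e. (for a singular M) a null vector."""
    n = M.shape[0]
    lu = spla.splu((M - shift * sp.eye(n, format='csc')).tocsc())
    v = np.ones(n) / np.sqrt(n)
    for _ in range(maxit):
        z = lu.solve(v)
        v = z / np.linalg.norm(z)
        if np.linalg.norm(M @ v) < tol:
            break
    return v

# Toy 3-state infinitesimal generator (rows sum to zero); the stationary
# distribution is the null vector of Q^T.
Q = sp.csc_matrix([[-2.0, 1.0, 1.0],
                   [ 1.0, -1.0, 0.0],
                   [ 0.0, 2.0, -2.0]])
pi = null_vector(sp.csc_matrix(Q.T))
print(pi / pi.sum())   # proportional to (2, 4, 1)/7
```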

ReportDOI
01 Oct 1982
TL;DR: The description and use of a Fortran general sparse solver, modified to run efficiently on a vector processor, and CRAY-1 performance in the analysis of 2D grids is presented.
Abstract: Description and use of a Fortran general sparse solver, modified to run efficiently on a vector processor, are given. CRAY-1 performance in the analysis of 2D grids is presented.

Book ChapterDOI
01 Jan 1982
TL;DR: Problems involving large systems of ordinary differential equations with sparse Jacobian matrices can be solved efficiently using low-order L-stable one-step methods, such as a semi-implicit Runge-Kutta method.
Abstract: Problems involving large systems of ordinary differential equations with sparse Jacobian matrices can be solved efficiently using low-order L-stable one-step methods. Sparse matrix techniques are applied to reduce computational work and to save storage when solving the large systems of linear equations that arise. Variable stepsize strategies have to be used, as the systems of ODEs are normally stiff. Iterative refinement is used in connection with incomplete factorisations obtained from the use of drop-tolerances during the factorisation process. This combination leads to reductions both in storage consumption and in the number of evaluations of the Jacobian matrix that have to be performed. Evidence of the efficiency of the strategies involved is given in the form of numerical results from a FORTRAN program package, SPARKS, that employs a semi-implicit Runge-Kutta method.
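
As a sketch of the basic building block (a linearly implicit Euler step stands in here, by assumption, for the package's semi-implicit Runge-Kutta formula; SPARKS itself is not reproduced):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def linearly_implicit_euler_step(f, J, y, h):
    """One step for y' = f(y): solve (I - h*J) k = f(y), then advance y + h*k.
    J is the sparse Jacobian; the paper pairs such solves with drop-tolerance
    (incomplete) factorisations plus iterative refinement."""
    n = y.size
    M = (sp.eye(n, format='csc') - h * J).tocsc()
    k = splu(M).solve(f(y))
    return y + h * k
```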



Journal ArticleDOI
TL;DR: A fast algorithm for composition of sparse maps (maps which act as the identity on most of the elements of their domain) is presented.
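
The listing cuts off here; as a generic illustration (not the paper's algorithm), such a map can be stored as a dict of its non-identity points, making composition linear in the number of stored points rather than in the size of the domain:

```python
def compose(f, g):
    """h = f . g for maps stored as {point: image} over their non-identity
    points only; h(x) = f(g(x)), stored the same sparse way."""
    h = {}
    for x, y in g.items():              # points moved by g
        z = f.get(y, y)
        if z != x:                      # drop points mapped back to themselves
            h[x] = z
    for x, y in f.items():              # points fixed by g but moved by f
        if x not in g:
            h[x] = y
    return h

f = {1: 2, 2: 1}          # swap 1 and 2
g = {2: 3, 3: 2}          # swap 2 and 3
print(compose(f, g))      # {2: 3, 3: 1, 1: 2}
```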