
Showing papers on "Sparse approximation published in 1977"


Journal ArticleDOI
01 Apr 1977
TL;DR: This paper surveys the state of the art in sparse matrix research in January 1976, and discusses the solution of sparse simultaneous linear equations, including the storage of such matrices and the effect of paging on sparse matrix algorithms.
Abstract: This paper surveys the state of the art in sparse matrix research in January 1976. Much of the survey deals with the solution of sparse simultaneous linear equations, including the storage of such matrices and the effect of paging on sparse matrix algorithms. In the symmetric case, relevant terms from graph theory are defined. Band systems and matrices arising from the discretization of partial differential equations are treated as separate cases. Preordering techniques are surveyed with particular emphasis on partitioning (to block triangular form) and tearing (to bordered block triangular form). Methods for solving the least squares problem and for sparse linear programming are also reviewed. The sparse eigenproblem is discussed with particular reference to some fairly recent iterative methods. There is a short discussion of general iterative techniques, and reference is made to good standard texts in this field. Design considerations when implementing sparse matrix algorithms are examined and finally comments are made concerning the availability of codes in this area.

242 citations


BookDOI
01 Jan 1977

97 citations




Journal ArticleDOI
01 Dec 1977-Networks
TL;DR: A new data structure, together with updating formulae, is presented for storing and updating the working basis inverse; it is a generalization of the well-known product form procedure frequently used in mathematical programming codes.
Abstract: When solving multicommodity network flow problems with either a primal or a dual partitioning technique one must carry and update a working basis inverse whose size need never exceed the number of saturated arcs (i.e. arcs for which there is no excess capacity). Efficient procedures have appeared in the literature for updating this inverse if it is carried in explicit form. However, for real world multicommodity network flow problems, this inverse may become quite large. In addition, this matrix may be quite dense even though the working basis may be sparse. In an attempt to obtain a sparse representation of this inverse matrix and simplify the computational burden, we present a new data structure along with the updating formulae for storing and updating this matrix. This data structure is a generalization of the well-known product form procedure frequently used in mathematical programming codes.
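The product-form idea that the abstract generalizes can be illustrated with a minimal sketch (assumed details, not the paper's data structure): the inverse is kept as a product of elementary "eta" factors, each equal to the identity with one column replaced, so a basis change appends one sparse factor instead of refactoring the whole inverse.

```python
import numpy as np

def apply_etas(etas, b):
    # Apply the stored product of eta factors to b, oldest factor first,
    # yielding B^{-1} b without ever forming the inverse densely.
    x = np.asarray(b, dtype=float).copy()
    for p, v in etas:
        xp = x[p]
        x[p] = 0.0
        x = x + xp * v
    return x

def update_basis(etas, p, a_new):
    # Column p of the basis is replaced by a_new: compute d = B^{-1} a_new
    # with the current factors, then append one new eta factor.  A sparse
    # implementation would store only the nonzero entries of v.
    d = apply_etas(etas, a_new)
    v = -d / d[p]
    v[p] = 1.0 / d[p]
    etas.append((p, v))

# Demo: start from B = I (no factors), swap a new vector into column 0.
etas = []
a = np.array([2.0, 1.0, 0.0])
update_basis(etas, 0, a)
# The updated inverse applied to a must give the unit vector e_0.
print(apply_etas(etas, a))  # -> [1. 0. 0.]
```

In a partitioning code the same file of eta factors would be periodically reinverted to keep it short and sparse.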

14 citations


ReportDOI
15 Jan 1977
TL;DR: Two new classes of computational algorithms for the solution of large, sparse unsymmetric sets of simultaneous equations are described, each of which detects matrix structure suitable for vector processing and, potentially, for faster processing on cache machines.
Abstract: The direct solution of large, sparse unsymmetric sets of simultaneous equations is commonly involved in the numerical solution of algebraic, differential, and partial differential equations. This report describes two new classes of computational algorithms for the solution of such equations. Each algorithm detects matrix structure suitable for vector processing and, potentially, for faster processing on cache machines. One procedure favors structure usually associated with small sparse matrices; the other is directed toward sets of equations requiring a large backing store. Comparisons of timing (on a cache machine) and of memory requirements are made between these new procedures and existing general sparsity techniques for a variety of science and engineering examples. Implementation issues for software realizations of the two algorithms are also discussed.

6 citations


Journal ArticleDOI
TL;DR: In this article, a class of nonsingular topology-symmetric perfect elimination sparse matrices with an average number of nonzero upper triangular elements equal to is considered, and the expected time complexity for the factorization is shown to be of order no greater than both for the essential and the overhead operations.
Abstract: A class of nonsingular topology-symmetric perfect elimination sparse matrices with an average number of nonzero upper triangular elements equal to is considered. The expected time complexity for the factorization is shown to be of order no greater than both for the essential and the overhead operations. The time complexity for the repeat solution is shown to be of order and for inversion of order . The implications of these results to sparse matrices in general are discussed.

4 citations


Journal ArticleDOI
TL;DR: The subroutines RDSPMX, ADSPMX, MUSPMX, TRSPMX, MVSPMX, and WRSPMX of ACM Algorithm 408 were tested after conversion to Basic Fortran, and the following corrections appear to be needed.
Abstract: The subroutines RDSPMX, ADSPMX, MUSPMX, TRSPMX, MVSPMX, and WRSPMX of ACM Algorithm 408 were tested after conversion to Basic Fortran, and the following corrections appear to be needed: (1) In ADSPMX, the line after statement number 9 should be changed to IF (T.EQ.0.0) GO TO 911 and before statement number 11 the following line should be inserted: 911 JB = JB -I1 (2) In TRSPMX, after statement number 5 the following line should be inserted: J2 = 0 The error in ADSPMX showed up when adding two matrices containing elements with opposite values in corresponding positions, which should cancel; the error in TRSPMX was noted when transposing a matrix having a null first line.

4 citations






Journal ArticleDOI
TL;DR: A simple compact method of storing the system matrix is described, which substantially reduces storage requirements and improves the efficiency of the computation by eliminating most multiplications by zero.
Abstract: Large sparse linear systems of equations occur frequently in engineering mathematics, and are usually solved by an iterative method such as successive over‐relaxation. One of the principal advantages of iterative methods is that the zeros of the system matrix are not destroyed. This property can be used to substantially reduce both computer storage requirements and the number of arithmetic operations. Most standard texts, however, do not emphasize this point. In this paper, a simple compact method of storing the system matrix is described, which substantially reduces storage requirements and improves the efficiency of the computation by eliminating most multiplications by zero.
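The storage scheme is not spelled out here, but the now-standard compressed-row idea captures the point: keep only each row's nonzero values and their column indices, so sweeps of an iterative method never multiply by zero. A minimal sketch (illustrative, not necessarily the paper's exact layout):

```python
def csr_matvec(values, cols, row_ptr, x):
    # y = A x using compressed sparse row storage: row i's nonzeros sit in
    # values[row_ptr[i]:row_ptr[i+1]], with column indices in cols.
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[cols[k]]
    return y

def sor_sweep(values, cols, row_ptr, b, x, omega=1.2):
    # One successive over-relaxation sweep, updating x in place; only the
    # stored nonzeros of each row are visited, and the zeros of A survive.
    n = len(row_ptr) - 1
    for i in range(n):
        sigma, diag = 0.0, 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = cols[k]
            if j == i:
                diag = values[k]
            else:
                sigma += values[k] * x[j]
        x[i] += omega * ((b[i] - sigma) / diag - x[i])
    return x

# The 3x3 matrix [[4,0,1],[0,3,0],[1,0,2]] stored compactly:
values  = [4.0, 1.0, 3.0, 1.0, 2.0]
cols    = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, cols, row_ptr, [1.0, 1.0, 1.0]))  # -> [5.0, 3.0, 3.0]
```

For a matrix with a handful of nonzeros per row, each sweep costs O(nonzeros) rather than O(n^2), which is the saving the abstract describes.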


Proceedings ArticleDOI
13 Jun 1977
TL;DR: This paper presents selected aspects of the SPARCOM data base system, as reflected by applications in a crime-combating environment.
Abstract: This paper presents only certain aspects of a powerful data base system called SPARCOM, as reflected by some applications in a crime-combating environment. SPARCOM stands for Sparse Associative Relational Connection Matrix. It is a method that was developed for the analysis, interpretation, organization, classification, update, and structure of stored data, as well as for the search and retrieval of information from large data base (LDB) systems. The unique approach of this system is the conversion of data into large sparse binary matrices, which enables one to apply sophisticated sparse matrix techniques to perform data base operations. The operations are performed on the matrices as though the entire matrix were present, but great amounts of storage space are saved, and execution time is significantly reduced by storing and manipulating only the nonzero values. A further reduction in storage requirements and execution time is achieved by SPARCOM's intrinsic normalization process, which alleviates the grave problem of data redundancy caused by multi-valued attributes.
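The conversion of records into a sparse binary matrix can be sketched as follows (a hypothetical toy, not SPARCOM's actual format): each record is a row, each attribute:value pair is a column, only the positions of the 1s are stored, and a conjunctive query intersects the nonzero row sets of the selected columns.

```python
# Toy data set; the records and attribute names are invented for illustration.
records = [
    {"city": "Oslo", "crime": "fraud"},
    {"city": "Oslo", "crime": "theft"},
    {"city": "Bergen", "crime": "theft"},
]

# Build the sparse binary matrix column by column: for each attribute:value
# pair, store only the set of row indices holding a 1.
columns = {}
for row, rec in enumerate(records):
    for attr, val in rec.items():
        columns.setdefault(f"{attr}:{val}", set()).add(row)

def match(*pairs):
    # Records satisfying all attribute:value conditions are found by
    # intersecting the nonzero row sets of the corresponding columns,
    # i.e. an AND over binary matrix columns that never touches a zero.
    result = columns.get(pairs[0], set()).copy()
    for p in pairs[1:]:
        result &= columns.get(p, set())
    return sorted(result)

print(match("city:Oslo", "crime:theft"))  # -> [1]
```

Storage and query time both scale with the number of 1s rather than with the full matrix, which is the saving the abstract claims for SPARCOM.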