Showing papers on "Sparse approximation published in 1979"


Journal ArticleDOI
TL;DR: The authors discuss ways of overcoming known shortcomings of their sparse matrix subroutines and otherwise improving the performance and flexibility of the code, and explain their decision to write the new code in standard Fortran.
Abstract: [The package] has been used either as a stand-alone subroutine or within larger programming packages in many centers throughout the world. Although most users have been very pleased with the performance of these subroutines, they have criticized them because (i) they are not easily portable, since they use facilities peculiar to the IBM 360/370 series (although some improvements have been made since the original version); (ii) the data interface is not very convenient for the user, particularly when providing a matrix with a nonzero pattern identical to one already treated; (iii) the initial decomposition phase is slow when the factorized form has more than a very small average number of nonzeros per row; and (iv) some improvement can be made to the speed of the final solution phase. Since the summer of 1975, the authors have been discussing ways of overcoming these problems and otherwise improving the performance and flexibility of the code; our purpose here is to explain the decisions we have reached. Perhaps our easiest decision was to use standard Fortran for the new code, checking it with …

103 citations
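
Point (ii) above, reusing work when a new matrix shares the nonzero pattern of one already treated, is the classic split between a symbolic and a numeric phase. A minimal Python/SciPy sketch of that idea (an illustration of the concept, not the Fortran package's interface): the pattern is analyzed once for an ordering, and every matrix with that pattern reuses it.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

def analyze(pattern):
    # Symbolic phase: pick a bandwidth-reducing ordering from the
    # nonzero pattern alone; done once per pattern.
    return reverse_cuthill_mckee(pattern.tocsr(), symmetric_mode=True)

def factorize(A, perm):
    # Numeric phase: permute and factor any matrix sharing the
    # analyzed pattern, skipping the symbolic work.
    return splu(A[perm, :][:, perm].tocsc(), permc_spec='NATURAL')

A = sp.csc_matrix(np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]]))
perm = analyze(A)                        # once per nonzero pattern
b = np.array([1., 2., 3.])
for scale in (1.0, 2.0):                 # many matrices, same pattern
    lu = factorize(A * scale, perm)
    y = lu.solve(b[perm])
    x = np.empty_like(y); x[perm] = y    # undo the permutation
```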


Journal ArticleDOI
TL;DR: It is shown that the existence of an NP-complete set whose complement is sparse implies P = NP, and that if there is a polynomial time reduction with sparse range to a PTAPE-complete set, then P = PTAPE.
Abstract: Hartmanis and Berman have conjectured that all $NP$-complete sets are polynomial time isomorphic. A consequence of the conjecture is that there are no sparse $NP$-complete sets. We show that the existence of an $NP$-complete set whose complement is sparse implies $P = NP$. We also show that if there is a polynomial time reduction with sparse range to a $PTAPE$-complete set, then $P = PTAPE$.

74 citations
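
For reference, "sparse" here carries its standard structural-complexity meaning: a polynomially bounded census function. Stated in LaTeX:

```latex
% A set S over a finite alphabet \Sigma is sparse if the number of its
% strings up to each length n is bounded by a polynomial in n.
\[
  S \subseteq \Sigma^{*} \text{ is sparse}
  \iff
  \exists \text{ polynomial } p \ \forall n \in \mathbb{N} :\;
  \bigl|\{\, w \in S : |w| \le n \,\}\bigr| \le p(n).
\]
```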



Journal ArticleDOI
TL;DR: A sparse matrix package is described which effectively insulates the user from these considerations, but which still allows the user to conveniently use the package in a variety of ways.
Abstract: Software for solving sparse systems of linear equations typically involves fairly complicated data structures and storage management. In many cases the user of such software simply wants to solve a system of equations, and should not have to be concerned with the way this storage management is actually done, or the way the matrix components are actually stored. In this paper we describe a sparse matrix package which effectively insulates the user from these considerations, but which still allows the user to conveniently use the package in a variety of ways.

51 citations
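
The flavor of such an interface can be sketched in a few lines of Python (a hypothetical facade, not the package described above): the caller supplies entries and right-hand sides, while ordering, storage, and factorization stay hidden behind the object.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

class SparseSolver:
    def __init__(self, n):
        self.n = n
        self.rows, self.cols, self.vals = [], [], []
        self._lu = None

    def add(self, i, j, v):
        # The user states only "a(i, j) += v" and never sees the
        # underlying storage scheme.
        self.rows.append(i); self.cols.append(j); self.vals.append(v)
        self._lu = None

    def solve(self, b):
        if self._lu is None:   # factor lazily, reuse across solves
            A = sp.coo_matrix((self.vals, (self.rows, self.cols)),
                              shape=(self.n, self.n)).tocsc()
            self._lu = splu(A)
        return self._lu.solve(np.asarray(b, dtype=float))

s = SparseSolver(2)
s.add(0, 0, 2.0); s.add(1, 1, 3.0); s.add(0, 1, 1.0)
print(s.solve([3.0, 3.0]))   # -> [1. 1.]
```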


Journal ArticleDOI
TL;DR: This work considers the problem of triangulating a sparse matrix in a parallel processing system and asks how the rows and columns of the matrix should be reordered to minimize the completion time of the parallel triangulation when an unrestricted number of processors is used, and how the operations should be scheduled when the number of processors is fixed.
Abstract: We consider the problem of triangulating a sparse matrix in a parallel processing system and attempt to answer the following questions: 1) How should the rows and columns of the matrix be reordered in order to minimize the completion time of the parallel triangulation process if an unrestricted number of processors is used? 2) If the number of processors is fixed, what is the minimum completion time and how should the parallel operations be scheduled? Implementation of the parallel algorithm is discussed and experimental results are given.

47 citations
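
Question 1 amounts to finding the critical path of the elimination-dependency DAG: pivots whose dependencies are all satisfied can be eliminated in the same parallel step. A minimal Python sketch of that level computation (an illustration of the scheduling idea, not the paper's reordering algorithm):

```python
# Compute the "level" of each pivot from the elimination dependencies.
# Pivots in the same level can be eliminated concurrently; with
# unlimited processors the completion time is the number of levels,
# i.e. the length of the DAG's critical path.
def elimination_levels(dep):
    # dep[j] = set of earlier pivots that pivot j depends on
    n = len(dep)
    level = [0] * n
    for j in range(n):
        level[j] = 1 + max((level[k] for k in dep[j]), default=0)
    return level

# pivot 2 needs 0 and 1; pivot 3 needs 2 only
deps = [set(), set(), {0, 1}, {2}]
print(elimination_levels(deps))   # [1, 1, 2, 3] -> 3 parallel steps
```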


01 Nov 1979
TL;DR: This paper proposes special methods of sparse Gaussian elimination for staircase-structured systems; these are particularly applicable to linear programming problems whose constraints have a staircase structure, and may also find application in solving staircase linear systems that arise in nonlinear optimization and optimal control.
Abstract: A square system of linear equations is said to be sparse if it can be solved most efficiently through a knowledge of its arrangement of zero and nonzero coefficients. Sparse systems are commonly solved by the techniques of sparse Gaussian elimination. An important class of sparse systems is those that have a 'staircase' structure: their variables fall into a natural sequence of disjoint groups, and each equation relates only variables within one group or within two adjacent groups. This paper proposes special methods of sparse Gaussian elimination for staircase-structured systems. These methods are particularly applicable to linear programming problems whose constraints have a staircase structure; they may also find application in solving staircase linear systems that arise in nonlinear optimization and optimal control. The initial sections of this paper present a self-contained review of sparse elimination and derive pertinent properties of staircase systems. Subsequent sections pursue two approaches to staircase elimination and report initial computational experience in detail. (Author)

23 citations
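
The payoff of the staircase structure is that elimination only ever couples adjacent stages. A minimal dense-block Python sketch of this (block-tridiagonal elimination with assumed nonsingular pivot blocks, standing in for the paper's sparse methods):

```python
import numpy as np

def staircase_solve(D, L, U, b):
    # D: k diagonal blocks; L[i] couples stage i+1 back to stage i;
    # U[i] couples stage i forward to stage i+1; b: k right-hand blocks.
    k = len(D)
    D = [d.copy() for d in D]; b = [v.copy() for v in b]
    for i in range(1, k):                 # forward elimination
        m = L[i - 1] @ np.linalg.inv(D[i - 1])   # inv() only for the sketch
        D[i] -= m @ U[i - 1]
        b[i] -= m @ b[i - 1]
    x = [None] * k                        # back substitution
    x[-1] = np.linalg.solve(D[-1], b[-1])
    for i in range(k - 2, -1, -1):
        x[i] = np.linalg.solve(D[i], b[i] - U[i] @ x[i + 1])
    return x

# three stages of size 2 with identity coupling between neighbours
k, n = 3, 2
D = [4 * np.eye(n) for _ in range(k)]
L = [np.eye(n) for _ in range(k - 1)]
U = [np.eye(n) for _ in range(k - 1)]
b = [np.ones(n) for _ in range(k)]
x = staircase_solve(D, L, U, b)
```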


Journal ArticleDOI
TL;DR: The use of vector processors for the analysis of large integrated circuits is discussed in this article, where the CRAY-1 processor is evaluated by examining the performance of kernels of a typical circuit analysis program.
Abstract: Mathematical models for current vector processors are presented and the use of these models in performance evaluation is discussed. Methods of defining vectors in the solution of sparse equations are summarized, and the vectorized solution of several classes of sparse problems is illustrated. Equation reordering based on symmetrical operations on a graph (folding, rotation) is shown to produce vectors in the solution. Finally, the use of a current vector processor, the CRAY-1, for the analysis of large integrated circuits is evaluated by examining the performance of kernels of a typical circuit analysis program.

18 citations
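
One standard two-parameter model of the kind the abstract refers to, given here as a representative example rather than necessarily the authors' model, is Hockney's: a vector processor is characterized by its asymptotic rate r∞ and half-performance length n½.

```latex
% Time for a vector operation of length n, and the resulting rate:
\[
  t(n) \;=\; \frac{n + n_{1/2}}{r_{\infty}},
  \qquad
  r(n) \;=\; \frac{n}{t(n)} \;=\; \frac{r_{\infty}}{1 + n_{1/2}/n},
\]
% so the machine reaches half its asymptotic rate at n = n_{1/2}.
```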


Journal ArticleDOI
TL;DR: Combining the concepts described in this paper makes it possible to solve inter-regional input-output systems (and other types of large, sparse, linear systems) with considerable efficiency in storage and computation.
Abstract: Combining the concepts described in this paper makes it possible to solve inter-regional input-output systems (and other types of large, sparse, linear systems) with considerable efficiency in storage and computation. The exact number of operations and corresponding savings in computational time and storage depend on the particular zero-nonzero structure of each matrix in the system, but in any case the savings can be enormous. A recommended procedure is summarized below.

1. Take the entire inter-regional IO system, which includes the representation of each of the regions and of the links among them, and express it in the form Mx = Bz or, more simply, Mx = b (since both B and z are known). It may be helpful for this purpose to draw a picture of the large matrix M.

2. Partition M into blocks, exploiting the structure of the particular system. For example, at the very least, the matrix or matrices corresponding to each region will probably be separate blocks. The analysis required for this step may lead to reformulating the matrix representation of the given economic system by, for example, replacing a set of equations with linear combinations of these same equations, particularly for the equations representing the links among regions.

3. Identify an appropriate partition of the blocks of M, and a corresponding partition of the vectors x and b, for performing a block factorization. This solution algorithm, which solves for x in Mx = b (where M now represents the entire system), is not available as a packaged program and so for the foreseeable future must be written in 'home-made' computer code that assumes a sparse matrix storage scheme compatible with the package to be used (as in 4 below). The algorithm for block factorization in the 2×2 case relevant to our particular inter-regional IO system is given in Section 8 of this paper.

4. Some steps of the algorithm require solving smaller systems of the form Hu = v for u, given some matrix H and some vector v. Efficient packaged subroutines can be obtained at low cost in order to solve these systems by finding the block triangular form corresponding to a particular H, then performing the LU factorization of the diagonal blocks only. The home-made code will interact with these subroutines.

11 citations
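
Step 3's 2×2 block factorization can be made concrete with the Schur complement. A dense numpy sketch (standing in for the sparse packaged subroutines of step 4; assumes the leading block A is nonsingular):

```python
import numpy as np

def block_solve(A, B, C, D, b1, b2):
    # Solve [[A, B], [C, D]] @ [x1, x2] = [b1, b2].
    y = np.linalg.solve(A, np.column_stack([B, b1]))
    AinvB, Ainvb1 = y[:, :-1], y[:, -1]
    S = D - C @ AinvB                       # Schur complement of A
    x2 = np.linalg.solve(S, b2 - C @ Ainvb1)
    x1 = Ainvb1 - AinvB @ x2
    return x1, x2

A = np.array([[4., 1.], [1., 3.]])
B = np.array([[1.], [0.]])
C = np.array([[0., 1.]])
D = np.array([[2.]])
x1, x2 = block_solve(A, B, C, D, np.array([1., 2.]), np.array([3.]))
```

Only systems with A and S as coefficient matrices are ever solved, which is exactly where the packaged block-triangular/LU subroutines of step 4 plug in.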



Journal ArticleDOI
01 Sep 1979
TL;DR: The problem of sorting all the rows of a sparse matrix according to increasing or decreasing column indices is considered and an algorithm for doing the sort in order τ operations is given.
Abstract: The problem of sorting all the rows of a sparse matrix according to increasing or decreasing column indices is considered. An algorithm for doing the sort in order τ operations (where τ is the number of nonzeroes in the matrix) is given.

5 citations
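
The bucket idea behind an order-τ sort can be shown in a few lines of Python (a sketch of the technique, not the paper's code): one pass drops each entry into the bucket of its column index, and one pass reads the buckets back in column order, so no comparison sort is needed and the work is linear in the number of nonzeros.

```python
def sort_rows(rows, ncols):
    # rows: list of lists of (col, val) pairs in arbitrary order
    buckets = [[] for _ in range(ncols)]
    for i, row in enumerate(rows):
        for col, val in row:
            buckets[col].append((i, val))     # one pass over nonzeros
    sorted_rows = [[] for _ in rows]
    for col, bucket in enumerate(buckets):    # read back in column order
        for i, val in bucket:
            sorted_rows[i].append((col, val))
    return sorted_rows

A = [[(2, 5.0), (0, 1.0)], [(1, 3.0)]]
print(sort_rows(A, 3))   # [[(0, 1.0), (2, 5.0)], [(1, 3.0)]]
```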


Journal ArticleDOI
R. Saeks
01 Apr 1979

25 Jan 1979
TL;DR: A variety of assembly language codes developed for the CRAY-1 are described, including a vectorized general direct sparse equation solver, coding and performance of kernels arising from vector solution of subsystems of large sparse systems, and highly efficient 'gather' and 'scatter' codes useful in equation formulation.
Abstract: This report describes a variety of assembly language codes developed for the CRAY-1 as part of general studies on implementation and use of linear algebra algorithms on vector machines. Included are descriptions of the performance and use of a vectorized general direct sparse equation solver, coding and performance of kernels arising from vector solution of subsystems of large sparse systems, and highly efficient 'gather' and 'scatter' codes useful in equation formulation. (Author)
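
The gather/scatter pattern those assembly kernels implement can be illustrated in numpy (a sketch of the pattern only, not the report's code): gather packs the needed entries of a sparse row into a contiguous vector so the arithmetic runs at vector speed, and scatter writes the results back through the index list.

```python
import numpy as np

x = np.zeros(8)
idx = np.array([1, 4, 6])          # nonzero positions in this row
vals = np.array([2.0, -1.0, 3.0])

dense = x[idx]                     # gather: indexed load into a dense vector
dense += vals                      # vector arithmetic on packed operands
x[idx] = dense                     # scatter: indexed store back
```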

ReportDOI
01 Aug 1979
TL;DR: An algorithm is presented which performs Gauss elimination on a very large sparse matrix, together with a set of subroutines that perform multiplication, transposition, addition, factorization, etc. with sparse matrices, taking full advantage of their property of being sparse.
Abstract: An algorithm is presented which performs Gauss elimination on a very large sparse matrix. A sparse matrix is a matrix having most of its elements equal to zero. Only the non-zeroes are stored and manipulated. When the size of the sparse matrix exceeds the available main storage of the computer, it becomes necessary to partition the matrix into submatrices, which are then stored in auxiliary storage and brought to main storage only when required. Each submatrix is a sparse matrix. In addition, there are practical cases in which it is possible to define the partition in such a way that most of the submatrices will be zero matrices. When this is achieved, the submatrices are considered to be the elements of the main matrix, and the main matrix is said to be supersparse: a sparse matrix having as its elements sparse submatrices. The algorithm presented here is essentially an efficient and automatic storage management procedure, plus a set of subroutines which can perform multiplication, transposition, addition, factorization, etc., with sparse matrices, taking full advantage of their property of being sparse.
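
The supersparse layout can be sketched in Python (a hypothetical in-memory layout illustrating the idea, without the report's auxiliary-storage management): the matrix is a sparse collection of blocks, so zero submatrices cost nothing and only stored blocks are ever touched.

```python
import numpy as np

class SuperSparse:
    def __init__(self, nblocks, bs):
        self.nblocks, self.bs = nblocks, bs
        self.blocks = {}               # (I, J) -> dense bs-by-bs block

    def add(self, i, j, v):
        I, J = i // self.bs, j // self.bs
        blk = self.blocks.setdefault((I, J), np.zeros((self.bs, self.bs)))
        blk[i % self.bs, j % self.bs] += v

    def matvec(self, x):
        y = np.zeros(self.nblocks * self.bs)
        for (I, J), blk in self.blocks.items():   # zero blocks skipped
            y[I*self.bs:(I+1)*self.bs] += blk @ x[J*self.bs:(J+1)*self.bs]
        return y

m = SuperSparse(nblocks=2, bs=2)
m.add(0, 0, 4.0); m.add(3, 3, 5.0)   # only two of four blocks ever exist
y = m.matvec(np.ones(4))
```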

Journal ArticleDOI
TL;DR: Motivation for the choice of order of elimination in dealing with sparse symmetric matrices is provided by consideration of a network analogue and then generalized to asymmetric matrices.
Abstract: Motivation for the choice of order of elimination in dealing with sparse symmetric matrices is provided by consideration of a network analogue and then generalized to asymmetric matrices.
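
In the network analogue, eliminating a pivot deletes its node and wires its neighbours together (the fill-in). A Python sketch of one classical way to exploit that picture, greedy minimum-degree ordering (a standard heuristic named here for illustration, not necessarily the paper's rule):

```python
def min_degree_order(adj):
    # adj: node -> set of neighbouring nodes (undirected graph)
    adj = {v: set(ns) for v, ns in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # least-connected node
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= (nbrs - {u})                # fill edges among neighbours
        order.append(v)
    return order

g = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(min_degree_order(g))   # -> [0, 1, 2, 3]
```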