Showing papers on "Sparse approximation published in 1978"


Journal ArticleDOI
Fred G. Gustavson1
TL;DR: An O(M) algorithm is produced to solve Ax = b, where M is the number of multiplications needed to factor A into LU; the concept of an unordered merge plays a key role in obtaining this algorithm.
Abstract: Let A and B be two sparse matrices whose orders are p by q and q by r. Their product C = AB requires N nontrivial multiplications, where 0 ≤ N ≤ pqr. The operation count of our algorithm is usually proportional to N; however, its worst case is O(p, r, N_A, N), where N_A is the number of elements in A. This algorithm can be used to assemble the sparse matrix arising from a finite element problem from the basic elements, using Σ_{g=1}^{m} [order(g)]² operations, where m is the total number of basic elements and order(g) is the order of the g-th element matrix. The concept of an unordered merge plays a key role in obtaining our fast multiplication algorithm; it forces us to accept an unordered sparse row-wise format as output for the product C. The permuted transposition algorithm computes (RA)^T in O(p, q, N_A) operations, where R is a permutation matrix; it also orders an unordered sparse row-wise representation. We can combine these algorithms to produce an O(M) algorithm to solve Ax = b, where M is the number of multiplications needed to factor A into LU.
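
As a rough illustration of the row-wise scheme, here is a minimal Python sketch of sparse multiplication that accumulates each output row with a scatter table; the (col, value) pair layout is an assumption made for the example, not the paper's data structure. Note how each output row emerges in first-touch order, i.e. unordered by column, which is exactly the property the unordered merge accepts in exchange for speed.

    def sparse_matmul(A, B):
        # A, B: lists of rows; each row is a list of (col, value) pairs.
        # Returns C = A*B in the same format. Each output row comes out in
        # first-touch order, i.e. UNORDERED by column index.
        C = []
        for a_row in A:
            acc = {}                                # scatter/accumulate one output row
            for j, a_ij in a_row:
                for k, b_jk in B[j]:
                    acc[k] = acc.get(k, 0) + a_ij * b_jk
            C.append(list(acc.items()))             # no sort: row stays unordered
        return C

    # Tiny usage example (2x3 times 3x2):
    A = [[(0, 1.0), (2, 2.0)], [(1, 3.0)]]
    B = [[(0, 1.0)], [(1, 5.0)], [(0, 4.0), (1, 6.0)]]
    print(sparse_matmul(A, B))   # [[(0, 9.0), (1, 12.0)], [(1, 15.0)]]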

366 citations


Journal ArticleDOI
TL;DR: A numerical comparison of algorithms for unconstrained optimization that exploit sparsity in the second derivative matrix of the objective function, indicating which method to use in which circumstances.
Abstract: This paper presents a numerical comparison between algorithms for unconstrained optimization that take account of sparsity in the second derivative matrix of the objective function. Some of the methods included in the comparison use difference approximation schemes to evaluate the second derivative matrix, and others use an approximation to it which is updated regularly using the changes in the gradient. The results show which method to use in which circumstances and also suggest interesting future developments.
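
One ingredient of such a comparison is the difference approximation of a sparse second derivative matrix: structurally independent columns can be estimated from a single extra gradient evaluation (the column-grouping idea of Curtis, Powell, and Reid). A Python sketch under that reading, with the greedy grouping and all names invented for illustration:

    import numpy as np

    def group_columns(pattern, n):
        # Greedily group columns whose row supports are disjoint, so every
        # column in a group can be differenced with ONE extra gradient call.
        groups, used = [], [False] * n
        for j in range(n):
            if used[j]:
                continue
            grp, rows = [j], set(pattern[j])
            used[j] = True
            for k in range(j + 1, n):
                if not used[k] and rows.isdisjoint(pattern[k]):
                    grp.append(k)
                    rows |= pattern[k]
                    used[k] = True
            groups.append(grp)
        return groups

    def sparse_hessian_fd(grad, x, pattern, h=1e-6):
        # pattern[j]: set of rows i where H[i, j] may be nonzero.
        n = len(x)
        g0 = grad(x)
        H = np.zeros((n, n))
        for grp in group_columns(pattern, n):
            d = np.zeros(n)
            d[grp] = h                        # perturb the whole group at once
            dg = (grad(x + d) - g0) / h
            for j in grp:                     # rows of j are untouched by the rest of grp
                for i in pattern[j]:
                    H[i, j] = dg[i]
        return H

    # f(x) = 0.5*(x0^2 + x1^2 + x2^2) + x0*x1: columns 0 and 2 share one call.
    grad = lambda x: np.array([x[0] + x[1], x[1] + x[0], x[2]])
    print(sparse_hessian_fd(grad, np.zeros(3), [{0, 1}, {0, 1}, {2}]).round(3))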

71 citations


01 Oct 1978
TL;DR: In this article, it was shown that the existence of an NP-complete set whose complement is sparse implies P = NP, and that if there is a polynomial time reduction with sparse range to a PTAPE-complete set, then P = PTAPE.
Abstract: Hartmanis and Berman have conjectured that all NP-complete sets are polynomial time isomorphic. A consequence of the conjecture is that there are no sparse NP-complete sets. We show that the existence of an NP-complete set whose complement is sparse implies P = NP. We also show that if there is a polynomial time reduction with sparse range to a PTAPE-complete set, then P=PTAPE. Keywords: reduction, polynomial time, nondeterministic polynomial time, complete sets, sparsity.

70 citations


Journal ArticleDOI
TL;DR: The conclusion is that partial pivoting codes perform well and that they should be considered for sparse problems whenever pivoting for numerical stability is required.
Abstract: We compare several algorithms for sparse Gaussian elimination with column interchanges. The algorithms are all derived from the same basic elimination scheme, and they differ mainly in implementation details. We examine their theoretical behavior and compare their performance on a number of test problems with that of a high-quality complete threshold pivoting code. Our conclusion is that partial pivoting codes perform quite well and that they should be considered for sparse problems whenever pivoting for numerical stability is required.
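
For orientation, the pivoting rule being compared can be shown on a dense matrix in a few lines of Python; real sparse codes apply the same selection on compressed storage and also weigh fill-in, so this is an illustration of column interchanges only, not any of the benchmarked codes:

    import numpy as np

    def lu_column_interchanges(A):
        # At step k, swap in the column whose entry in pivot row k is largest,
        # then eliminate below the pivot. Multipliers are stored in place.
        A = A.astype(float).copy()
        n = A.shape[0]
        colperm = np.arange(n)
        for k in range(n - 1):
            j = k + int(np.argmax(np.abs(A[k, k:])))   # best pivot column in row k
            if j != k:
                A[:, [k, j]] = A[:, [j, k]]
                colperm[[k, j]] = colperm[[j, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k] = m                            # L part, stored in place
                A[i, k + 1:] -= m * A[k, k + 1:]
        return A, colperm                              # packed LU and column permutation

    # The interchange rescues a tiny leading pivot:
    LU, perm = lu_column_interchanges(np.array([[1e-8, 1.0], [1.0, 2.0]]))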

42 citations


Proceedings Article
13 Sep 1978
TL;DR: Certain aspects of a system called SPARCOM are described; its distinctive approach is the conceptual conversion of data into large sparse binary matrices, which makes it possible to apply sophisticated sparse matrix techniques to database operations.
Abstract: To meet the requirements of Large Data Base (LDB) systems for short response time and high throughput, a new approach has been investigated. This paper describes certain aspects of a system called SPARCOM. Its distinctive approach is the conceptual conversion of data into large sparse binary matrices, which makes it possible to apply sophisticated sparse matrix techniques to database operations. The matrices are presented here in their binary form only for the sake of clarity; the system's high performance comes from converting and manipulating the matrices directly in the compact form produced by the sparse matrix algorithms.
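
The encoding idea can be illustrated in a few lines of Python: each (attribute, value) pair becomes a binary column with one row per record, and a conjunctive query reduces to intersecting columns. The records and names below are invented for the example and say nothing about SPARCOM's actual layout:

    # Rows = records, columns = (attribute, value) pairs of a sparse binary matrix.
    records = [
        {"city": "Austin", "lang": "FORTRAN"},
        {"city": "Yale",   "lang": "SNOBOL4"},
        {"city": "Austin", "lang": "SNOBOL4"},
    ]

    # Each sparse binary column is stored compactly as the set of its nonzero rows.
    columns = {}
    for i, rec in enumerate(records):
        for attr, val in rec.items():
            columns.setdefault((attr, val), set()).add(i)

    # A conjunctive query is a bitwise AND of columns, i.e. a set intersection.
    hits = columns[("city", "Austin")] & columns[("lang", "SNOBOL4")]
    print(sorted(hits))   # [2]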

24 citations


01 Jan 1978
TL;DR: A variant of Gaussian elimination is presented for solving sparse symmetric systems of linear equations on computers with limited core storage, without the use of auxiliary storage such as disk or tape; most nonzero entries of the reduced triangular system are recomputed rather than saved, trading an increase in work for a decrease in storage.
Abstract: A variant of Gaussian elimination is presented for solving sparse symmetric systems of linear equations on computers with limited core storage, without the use of auxiliary storage such as disk or tape. The method is based on the somewhat unusual idea of recomputing rather than saving most nonzero entries in the reduced triangular system, thus trading an increase in work for a decrease in storage. For a nine-point problem with the nested dissection ordering on an n x n grid, fewer than (7/2)n² nonzeros must be saved, versus approximately (93/12)n² log₂ n for ordinary sparse elimination, while the work required at most doubles. The use of auxiliary storage in sparse elimination is also discussed.
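
To make the trade-off concrete, a few lines of Python evaluating the two storage bounds quoted above for sample grid sizes (plain arithmetic on the abstract's formulas):

    import math

    for n in (64, 256, 1024):
        recompute = 3.5 * n * n                       # < (7/2) n^2 entries saved
        standard = (93 / 12) * n * n * math.log2(n)   # ~ (93/12) n^2 log2(n) entries
        print(f"n={n:5d}  recompute={recompute:12.0f}  "
              f"standard={standard:12.0f}  ratio={standard / recompute:4.1f}x")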

21 citations


01 Oct 1978
TL;DR: The method is based on the bidiagonalization procedure of Golub and Kahan; it is analytically equivalent to the method of conjugate gradients (CG) but possesses more favorable numerical properties.
Abstract: A method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the method of conjugate gradients (CG) but possesses more favorable numerical properties. The Fortran implementation of the method (subroutine LSQR) incorporates reliable stopping criteria and provides estimates of various quantities, including standard errors for x and the condition number of A. Numerical tests are described comparing LSQR with several other CG algorithms. Further results for a large practical problem illustrate the effect of preconditioning least-squares problems using a sparse LU factorization of A.
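
Descendants of this subroutine are widely available; SciPy, for instance, ships an LSQR implementation, so a solve in this style looks roughly like the following (the random test matrix is purely illustrative):

    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    A = sprandom(100, 40, density=0.05, random_state=rng, format="csr")
    b = rng.standard_normal(100)

    # lsqr returns the solution plus diagnostics, echoing the quantities the
    # original subroutine reported (stopping reason, cond(A) estimate, variances).
    result = lsqr(A, b, calc_var=True)
    x, istop, itn, r1norm = result[:4]
    acond, var = result[6], result[9]
    print(f"istop={istop}, {itn} iterations, residual norm {r1norm:.3e}")
    print(f"estimated cond(A) = {acond:.3e}")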

18 citations


Journal ArticleDOI
TL;DR: It is shown that an algorithm for solving a system of nonlinear equations whose Jacobian is known to be sparse converges locally and Q-superlinearly.
Abstract: It is shown that an algorithm for solving a system of nonlinear equations whose Jacobian is known to be sparse converges locally and Q-superlinearly.
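
Results of this kind concern sparsity-respecting quasi-Newton iterations such as the Schubert/Broyden sparse update, in which each row's secant correction is confined to its known nonzero positions. A rough Python sketch under that reading (an illustration, not the paper's algorithm):

    import numpy as np

    def sparse_broyden_update(J, s, y, pattern):
        # J: current Jacobian approximation; s: step; y: f(x+s) - f(x);
        # pattern[i]: column indices where row i of the true Jacobian may be nonzero.
        # If J respects the pattern initially, every update preserves it.
        J = J.copy()
        for i, cols in enumerate(pattern):
            cols = np.asarray(sorted(cols))
            si = s[cols]                      # only the components row i depends on
            denom = float(si @ si)
            if denom > 0.0:
                resid = y[i] - J[i, cols] @ si
                J[i, cols] += (resid / denom) * si
        return J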

16 citations


Journal ArticleDOI
Fred G. Gustavson1
TL;DR: The subroutine TRSPMX of ACM Algorithm 408 was compared with Algorithm HALFP, and two corrections appear to be needed: inserting the line IF (NC.LE.1) GO TO 100 before the statement DO 5 ..., and inserting the line 100 J2 = 0 after label 5.
Abstract: The subroutine TRSPMX of ACM Algorithm 408 was compared with Algorithm HALFP [1], and the following corrections appear to be needed: (1) before the statement DO 5 ..., insert the line IF (NC.LE.1) GO TO 100; (2) after label 5, insert the line 100 J2 = 0. Correction (1) is required when the matrix is a column vector (NC = 1). The need for correction (2) was noted in [2], as TRSPMX fails when transposing a matrix with an empty first row.

Journal ArticleDOI
TL;DR: Two subroutines for node reordering to preserve sparsity of the Jacobian matrix and a subroutine for solving the matrix by the Doolittle method are included.
Abstract: A computer program for water network analysis has been developed using sparse matrix techniques. The method used is the successive linearization method, in which the set of equations [Q] = [Y][H] is solved. In the process, two subroutines for node reordering to preserve the sparsity of the Jacobian matrix and a subroutine for solving the system by the Doolittle method are included. The program is user oriented and requires a minimal amount of input data and effort. Networks of up to 200 nodes and 300 pipes may be analysed on an IBM 1130 computer with 16K words of memory.
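
In general form, successive linearization freezes the head-dependent coefficient matrix, solves the resulting linear system, and repeats until the heads settle. The toy coefficients in this Python sketch are invented for the illustration and do not model a real pipe network:

    import numpy as np

    def successive_linearization(build_Y, Q, H0, tol=1e-10, max_iter=50):
        # Solve the nonlinear system Y(H) H = Q by repeated linear solves:
        # freeze Y at the current heads, solve, and iterate to a fixed point.
        H = H0.copy()
        for _ in range(max_iter):
            H_new = np.linalg.solve(build_Y(H), Q)
            if np.max(np.abs(H_new - H)) < tol:
                return H_new
            H = H_new
        return H

    # Invented head-dependent coefficient matrix, purely to exercise the loop.
    def build_Y(H):
        return np.array([[2.0 + abs(H[0]), -1.0],
                         [-1.0, 2.0 + abs(H[1])]])

    print(successive_linearization(build_Y, np.array([1.0, 1.0]), np.zeros(2)))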

Journal ArticleDOI
Griss1
TL;DR: The use of an efficient sparse minor expansion method to directly compute the subresultants needed for the greatest common divisor (GCD) of two polynomials is described.
Abstract: In this paper, the use of an efficient sparse minor expansion method to directly compute the subresultants needed for the greatest common divisor (GCD) of two polynomials is described. The sparse minor expansion method (applied either to Sylvester's or Bezout's matrix) naturally computes the coefficients of the subresultants in the order corresponding to a polynomial remainder sequence (PRS), avoiding wasteful recomputation as much as possible. It is suggested that this is an efficient method to compute the resultant and GCD of sparse polynomials.
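
Modern computer algebra systems expose these objects directly; with SymPy, for example, one can inspect the subresultant PRS and check that its last nonzero entry is proportional to the GCD (an illustration, not the paper's minor-expansion code):

    from sympy import symbols, subresultants, resultant, gcd

    x = symbols("x")
    f = x**4 - 1      # (x**2 + 1)(x - 1)(x + 1)
    g = x**3 - x      #  x(x - 1)(x + 1)

    prs = subresultants(f, g)   # subresultant polynomial remainder sequence
    print(prs)                  # last nonzero entry is proportional to the GCD
    print(gcd(f, g))            # x**2 - 1
    print(resultant(f, g))      # 0, since f and g share roots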

Dissertation
01 Jan 1978
TL;DR: The numerical solution of sparse matrix equations by fast methods and associated computational techniques.
Abstract: The numerical solution of sparse matrix equations by fast methods and associated computational techniques


Journal ArticleDOI
TL;DR: This paper presents a new, optimal (according to a criterion defined later), sparsity-oriented ordering algorithm for sparse matrix calculations; it is shown to produce less fill-in than the clustering method of reference [4].
Abstract: This paper presents a new, optimal (according to a criterion defined later), sparsity-oriented ordering algorithm for application in sparse matrix calculations. A dynamic programming algorithm is developed which determines an ordered elimination such that the total number of fill-in terms is minimum. The ordering algorithm is shown to be better, i.e. to produce less fill-in, than the clustering method of reference [4]. The algorithm is valid for diagonally dominant matrices which are symmetric in the pattern of nonzero elements. The method has been found practical for ordering matrices appearing in a wide variety of engineering applications. Presently, it can be efficiently applied to matrices of the order of 50 rows.
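
The flavor of an exact minimum-fill search by dynamic programming can be reconstructed in a few lines: because the elimination graph depends only on the set of eliminated vertices, not their order, one can run a DP over subsets. This generic Python reconstruction (not the paper's algorithm) is exponential in the matrix order, consistent with the roughly 50-row practical limit quoted above:

    from itertools import combinations

    def eliminate(adj, v):
        # Eliminate v: pairwise-connect its neighbours (fill-in), then drop v.
        nbrs = adj[v] - {v}
        new = {u: s - {v} for u, s in adj.items() if u != v}
        fill = 0
        for a, b in combinations(sorted(nbrs), 2):
            if b not in new[a]:
                new[a] = new[a] | {b}
                new[b] = new[b] | {a}
                fill += 1
        return new, fill

    def graph_after(adj, S):
        # The elimination graph depends only on the set S, not on the order.
        g = adj
        for v in sorted(S):
            g, _ = eliminate(g, v)
        return g

    def min_fill_order(adj):
        # Exact minimum-fill elimination order via DP over eliminated subsets.
        verts = frozenset(adj)
        dp = {frozenset(): (0, [])}
        for size in range(len(verts)):
            for S in [s for s in dp if len(s) == size]:
                cost, order = dp[S]
                g = graph_after(adj, S)
                for v in verts - S:
                    _, f = eliminate(g, v)
                    T = S | {v}
                    if T not in dp or cost + f < dp[T][0]:
                        dp[T] = (cost + f, order + [v])
        return dp[verts]

    # 4-cycle: one fill edge is unavoidable, and the DP finds an order achieving it.
    adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(min_fill_order(adj))   # (1, [0, 1, 2, 3]) or another optimal order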

Proceedings ArticleDOI
13 Nov 1978
TL;DR: The applicability of AP-120B/190L array processors to solving large systems of simultaneous linear equations using sparse matrix techniques is described; such techniques are used in fields as diverse as structural analysis, chemical engineering, electric power network analysis, computer-aided network design, and physics.
Abstract: This paper describes the applicability of AP-120B/190L array processors for solving large systems of simultaneous linear equations using sparse matrix techniques. These techniques are used in such diverse fields as structural analysis, chemical engineering, electric power network analysis, computer-aided network design, and physics. Several sparse matrix benchmark programs which have been written for the AP-120B/190L are described. Results of these programs are presented for various hardware configurations. Finally, the performance on sparse matrix problems of the AP-120B/190L is compared to that of large vector machines.

Journal ArticleDOI
01 Oct 1978
TL;DR: Algorithms for formulating the hybrid equations of linear multiports are derived by means of block matrix elimination and can be implemented using sparse matrix techniques.
Abstract: Algorithms for formulating the hybrid equations of linear multiports are derived by means of block matrix elimination. The algorithms can be implemented using sparse matrix techniques. The computational requirements of each algorithm are determined, and rules on how to choose the most efficient algorithm for solving a given problem are obtained.
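
Block matrix elimination here amounts to reducing a partitioned system by forming a Schur complement: eliminating the first block row of [[A, B], [C, D]] leaves D - C A^{-1} B. A minimal numerical sketch in Python, with the block values invented for the example:

    import numpy as np

    def block_eliminate(A, B, C, D):
        # Eliminating the first block row of [[A, B], [C, D]] leaves the
        # Schur complement D - C A^{-1} B as the reduced coefficient matrix.
        return D - C @ np.linalg.solve(A, B)

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    B = np.array([[1.0], [0.0]])
    C = np.array([[0.0, 2.0]])
    D = np.array([[5.0]])
    print(block_eliminate(A, B, C, D))   # 1x1 reduced (hybrid) matrix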

Journal ArticleDOI
TL;DR: How sparse matrix operations might be embedded in SNOBOL4 is discussed, using the programmer-defined function capability, the TABLE datatype, and the APPLY built-in function, which allows function names to be treated as variables.
Abstract: We shall discuss in this paper how sparse matrix operations might be embedded in SNOBOL4, using the programmer-defined function capability, the TABLE datatype, and the APPLY built-in function which allows function names to be treated as variables.
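
SNOBOL4's TABLE is essentially a hash map, so the representation amounts to keying nonzeros by their index pair. The analogous idea in Python, purely for illustration (the naive product loop is fine for a sketch):

    # Store only the nonzeros, keyed by (row, col); absent entries read as zero.
    def sp_get(m, i, j):
        return m.get((i, j), 0)

    def sp_add(a, b):
        out = dict(a)
        for key, v in b.items():
            out[key] = out.get(key, 0) + v
        return out

    def sp_mul(a, b):
        # Naive pairing over nonzeros of both operands.
        out = {}
        for (i, j), av in a.items():
            for (j2, k), bv in b.items():
                if j == j2:
                    out[(i, k)] = out.get((i, k), 0) + av * bv
        return out

    a = {(0, 0): 1, (0, 2): 2}
    b = {(0, 1): 3, (2, 1): 4}
    print(sp_mul(a, b))   # {(0, 1): 11}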


Journal ArticleDOI
TL;DR: The reliability and efficiency of the SPARSE package are attained through its system organization, which is based on the ideas of structured programming.
Abstract: A description is given of the structure of the SPARSE package for solving systems of linear algebraic equations arising from the integration of systems of ordinary differential equations. The package's reliability and efficiency are attained through an appropriate system organization based on the ideas of structured programming. The package is implemented in FORTRAN on the BESM-6 computer.