
Showing papers on "Sparse approximation published in 1976"


Journal ArticleDOI
TL;DR: Extensive testing on finite element matrices indicates that the algorithm typically produces bandwidth and profile which are comparable to those of the commonly-used reverse Cuthill–McKee algorithm, yet requires significantly less computation time.
Abstract: A new algorithm for reducing the bandwidth and profile of a sparse matrix is described. Extensive testing on finite element matrices indicates that the algorithm typically produces bandwidth and profile which are comparable to those of the commonly-used reverse Cuthill–McKee algorithm, yet requires significantly less computation time.
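For reference, the reverse Cuthill–McKee ordering against which the paper benchmarks is a breadth-first numbering of the matrix graph, read in reverse. A minimal sketch in Python (the graph representation and helper names are ours, not the paper's):

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph.

    adj: dict mapping vertex -> set of neighbours (the nonzero
    structure of a symmetric sparse matrix, diagonal excluded).
    Returns a permutation of the vertices.
    """
    order, visited = [], set()
    # Start each component from a vertex of minimum degree.
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # Visit unseen neighbours in order of increasing degree.
            for w in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(w)
                queue.append(w)
    return order[::-1]  # the "reverse" in reverse Cuthill-McKee

def bandwidth(adj, order):
    """Bandwidth of the matrix after renumbering by `order`."""
    pos = {v: i for i, v in enumerate(order)}
    return max((abs(pos[v] - pos[w]) for v in adj for w in adj[v]), default=0)
```

Renumbering rows and columns by this permutation concentrates the nonzeros near the diagonal; per the abstract, the paper's algorithm reaches comparable bandwidth and profile with less computation.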

569 citations


Journal ArticleDOI
TL;DR: This paper compares and analyzes six algorithms which have been suggested recently for use in reducing, by permutations, the bandwidth and profile of sparse matrices.
Abstract: This paper compares and analyzes six algorithms which have been suggested recently for use in reducing, by permutations, the bandwidth and profile of sparse matrices. This problem arises in many different areas of scientific computation such as in the finite element method for approximating solutions of partial differential equations and in analyzing large-scale power transmission systems.

105 citations


Book ChapterDOI
Fred G. Gustavson
01 Jan 1976
TL;DR: This chapter presents a non-recursive version of Tarjan's algorithm, called STCO, along with a detailed analysis and a sparse matrix interpretation, and describes some new heuristics, supplementing the results with a further statistical study.
Abstract: Publisher Summary This chapter deals with efficient algorithms that find the block lower triangular form of a sparse matrix. The best algorithm has two stages, namely, finding a maximal assignment for A followed by finding the strong components of the associated directed graph. Tarjan's recursive algorithm finds the strong components of a directed graph in O(V + E) time. The chapter presents a non-recursive version of Tarjan's algorithm, called STCO, along with a detailed analysis and a sparse matrix interpretation. The analysis reveals the constants of the linear bounds in V and E. The work of Hopcroft and Karp is an important contribution to the maximal assignment problem. The chapter describes some new heuristics and supplements the results with a further statistical study. The more complicated parts of ASSIGN ROW and the Hopcroft–Karp algorithm are hardly ever used.
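For illustration, the recursion in Tarjan's algorithm can be replaced by an explicit stack of (vertex, successor-iterator) frames. The sketch below is a generic iterative version in Python, not the STCO code itself:

```python
def strong_components(graph):
    """Strongly connected components of a directed graph, iteratively.

    graph: dict mapping vertex -> list of successors (all vertices
    appear as keys). Returns a list of components (lists of vertices).
    """
    index, lowlink = {}, {}       # discovery order and low-link values
    stack, on_stack = [], set()   # Tarjan's component stack
    components, counter = [], 0

    for root in graph:
        if root in index:
            continue
        index[root] = lowlink[root] = counter
        counter += 1
        stack.append(root)
        on_stack.add(root)
        # Each frame holds a vertex and its partially consumed iterator.
        work = [(root, iter(graph[root]))]
        while work:
            v, succs = work[-1]
            advanced = False
            for w in succs:
                if w not in index:            # tree edge: descend
                    index[w] = lowlink[w] = counter
                    counter += 1
                    stack.append(w)
                    on_stack.add(w)
                    work.append((w, iter(graph[w])))
                    advanced = True
                    break
                elif w in on_stack:           # back/cross edge in component
                    lowlink[v] = min(lowlink[v], index[w])
            if advanced:
                continue
            work.pop()                        # v's successors are exhausted
            if work:                          # propagate low-link upward
                u = work[-1][0]
                lowlink[u] = min(lowlink[u], lowlink[v])
            if lowlink[v] == index[v]:        # v roots a strong component
                comp = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    comp.append(w)
                    if w == v:
                        break
                components.append(comp)
    return components
```

Applied to the directed graph associated with a sparse matrix after a maximal (zero-free diagonal) assignment, the strong components are exactly the diagonal blocks of the block lower triangular form.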

34 citations



Book ChapterDOI
01 Jan 1976
TL;DR: In this paper, the authors describe the design of sparse Gaussian elimination codes and examine possible tradeoffs among the design goals of flexibility, speed, and small size, and discuss the effects of certain flexibility and cost constraints on the design.
Abstract: Publisher Summary This chapter explains the considerations required in the design of software for sparse Gaussian elimination. Several implementations of sparse Gaussian elimination have been developed to solve such systems. When a large sparse system of linear equations Ax = b is considered, where A is an N × N sparse nonsymmetric matrix and x and b are vectors of length N, the basic idea of all of these is to factor A and to compute x without storing or operating on zeroes in A, L, or U, where L is a unit lower triangular matrix and U is an upper triangular matrix. Doing this requires a certain amount of storage and operational overhead, that is, extra storage for pointers in addition to that needed for nonzeroes, and extra nonnumeric bookkeeping operations in addition to the required arithmetic operations. All these implementations of sparse Gaussian elimination generate the same factorization of A and avoid storing and operating on zeroes. Thus, they all have the same costs as measured in terms of the number of nonzeroes in L and U or the number of arithmetic operations performed. However, the implementations do have different overhead requirements, and thus their total storage and time requirements vary a great deal. The chapter describes the design of sparse Gaussian elimination codes. It discusses the effects of certain flexibility and cost constraints on the design, and examines possible tradeoffs among the design goals of flexibility, speed, and small size.
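To make the storage-versus-overhead tradeoff concrete, here is a minimal sketch of sparse LU factorization that stores and touches only nonzeros. It uses Python dictionaries purely for brevity; the production codes of the period used compact pointer arrays instead, and no pivoting is shown:

```python
def sparse_lu(A, n):
    """LU factorization of a sparse matrix, storing no zeros.

    A: dict mapping row index -> {column index: value} for nonzeros.
    Returns (L, U) in the same format, with L unit lower triangular
    and U upper triangular. No pivoting: every pivot A[k][k] that
    arises during elimination is assumed nonzero.
    """
    L = {i: {} for i in range(n)}
    U = {i: {} for i in range(n)}
    rows = {i: dict(A.get(i, {})) for i in range(n)}  # working copy
    for k in range(n):
        pivot = rows[k][k]                  # assumed nonzero
        U[k] = {j: v for j, v in rows[k].items() if j >= k}
        L[k][k] = 1.0
        for i in range(k + 1, n):
            if k not in rows[i]:
                continue                    # nothing to eliminate in row i
            m = rows[i].pop(k) / pivot
            L[i][k] = m
            # Update only where row k is nonzero; new entries created
            # here are the "fill-in" that ordering strategies fight.
            for j, v in rows[k].items():
                if j > k:
                    rows[i][j] = rows[i].get(j, 0.0) - m * v
    return L, U
```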

23 citations


Journal ArticleDOI
TL;DR: An efficient algorithm is developed for the Cramer's rule solution of large "sparse" linear systems with polynomial coefficients (many coefficients zero), as an alternative to sparse elimination methods.
Abstract: It has been shown recently [3, 4, 7] that the determinant of an N × N matrix with polynomial elements can be computed efficiently via minor expansion, even though Gaussian elimination is well known to require only O(N³) operations [8, pp. 425, 440]. The point is that the cost of polynomial operations changes dramatically with the nature of the polynomials; Gaussian elimination tends to rapidly "degrade" all polynomial entries in the matrix, while minor expansion uses only the original matrix elements. In particular, minor expansion is superior when the polynomial entries are sparse (many zero coefficients in a full monomial expansion), whereas a Bareiss-type fraction-free elimination method [1] becomes dominant in the dense polynomial case. A minor-expansion algorithm (and explicit ALTRAN program) designed to efficiently compute the determinant of an N × N "full" matrix (almost all elements and minors are nonzero) is described by Gentleman [3]. This would be used via Cramer's rule to solve full linear systems with (sparse) polynomial coefficients. In this paper we extend these ideas to develop an efficient algorithm for the Cramer's rule solution of large "sparse" linear systems with polynomial coefficients (many coefficients zero), as an alternative to sparse elimination methods. Previous work by the author studied a large (66 × 66) sparse linear system, using a fraction-free Gaussian elimination scheme with optimal pivoting [5]. Since the coefficient matrix involved many symbolic parameters (i.e. sparse polynomials), the aim is to develop a minor-expansion algorithm for matrices of this size.
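The heart of the minor-expansion approach is that minors over the same column subset are shared across expansions, and zero entries are never expanded. A small memoized sketch (function names are ours; entries may be numbers or polynomial objects supporting +, - and *):

```python
from functools import lru_cache

def det_minor_expansion(A):
    """Determinant by Laplace expansion with shared minors.

    A: square list of lists. Zero entries are skipped, which is
    what makes the method attractive for sparse (polynomial) matrices.
    """
    n = len(A)

    @lru_cache(maxsize=None)
    def minor(row, cols):
        # Determinant of the submatrix on rows row..n-1 and the column
        # tuple `cols`; caching shares minors across expansions.
        if not cols:
            return 1
        total, sign = 0, 1
        for k, j in enumerate(cols):
            a = A[row][j]
            if a:                              # skip zero entries
                rest = cols[:k] + cols[k + 1:]
                total = total + sign * a * minor(row + 1, rest)
            sign = -sign
        return total

    return minor(0, tuple(range(n)))
```

With Cramer's rule, each solution component is a ratio of two such determinants, and minors that avoid the replaced column can be reused across the numerator determinants, which is where the savings over elimination accumulate.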

19 citations


Book ChapterDOI
01 Jan 1976
TL;DR: By surveying several algorithms from each of these categories, the paper demonstrates the potential for investigating and applying sparse system techniques in optimization.
Abstract: Sparse matrices with special structure arise in almost every application of large scale optimization. In linear programming, these problems usually are solved by pivoting procedures, most notably the simplex method, refined and modified in various ways to exploit structure. More recently iterative relaxation methods and dual ascent algorithms have been proposed for certain applications. In surveying several algorithms from each of these categories, this paper demonstrates the potential for investigating and applying sparse system techniques in optimization.

17 citations


ReportDOI
01 Aug 1976
TL;DR: The method combines efficient sparse matrix techniques, as in the revised simplex method, with stable variable-metric methods for handling the nonlinearities; a general-purpose production code (MINOS) is described.
Abstract: : An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse matrix techniques as in the revised simplex method with stable variable-metric methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.

16 citations


Book ChapterDOI
01 Jan 1976
TL;DR: The data access requirements for typical sparse matrix computations are considered, and some of the main data structures used to meet these demands are reviewed.
Abstract: In this paper we consider the problem of designing and implementing computer software for sparse matrix computations. We consider the data access requirements for typical sparse matrix computations, and review some of the main data structures used to meet these demands. We also describe some tools and techniques we have found useful for developing sparse matrix software.
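A representative example of the data structures in question is compressed sparse row storage, which matches the row-wise access pattern of elimination and of matrix-vector products. A minimal sketch (class and field names are ours):

```python
class CSRMatrix:
    """Compressed sparse row storage: three flat arrays.

    row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i within
    the parallel arrays col_idx (column indices) and values.
    """
    def __init__(self, n, triples):
        # triples: iterable of (row, col, value) with distinct (row, col)
        by_row = sorted(triples)
        self.n = n
        self.col_idx = [c for _, c, _ in by_row]
        self.values = [v for _, _, v in by_row]
        # Counting pass, then prefix sums, give the row pointers.
        self.row_ptr = [0] * (n + 1)
        for r, _, _ in by_row:
            self.row_ptr[r + 1] += 1
        for i in range(n):
            self.row_ptr[i + 1] += self.row_ptr[i]

    def matvec(self, x):
        """y = A x, touching only stored nonzeros."""
        y = [0.0] * self.n
        for i in range(self.n):
            for k in range(self.row_ptr[i], self.row_ptr[i + 1]):
                y[i] += self.values[k] * x[self.col_idx[k]]
        return y
```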

14 citations


Book ChapterDOI
01 Jan 1976
TL;DR: Graph-theoretic techniques for analyzing the solution of large sparse systems of linear equations by partitioning and using block methods are developed, and questions concerning the stability of block methods are discussed.
Abstract: Graph-theoretic techniques for analyzing the solution of large sparse systems of linear equations by partitioning and using block methods are developed. Questions concerning stability of block methods are discussed.

12 citations


Journal ArticleDOI
TL;DR: A bipartite graph representation is proposed for the study of pivot strategies on sparse matrices, and an algorithm which fulfills Brayton's condition for Gaussian elimination optimality is devised.
Abstract: In this note a bipartite graph representation is proposed for the study of pivot strategies on sparse matrices. Using this representation, an algorithm which fulfills Brayton's condition for Gaussian elimination optimality has been devised.
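In the bipartite representation, row and column indices form the two vertex sets and each nonzero contributes an edge, so vertex degrees are row and column nonzero counts. A small sketch under those assumptions (the Markowitz cost shown is a classical pivot-cost measure included for illustration, not necessarily the note's criterion):

```python
def bipartite_graph(A):
    """Bipartite graph of a sparse matrix.

    A: dict mapping (i, j) -> nonzero value.
    Returns (row_adj, col_adj): row_adj[i] is the set of columns
    with a nonzero in row i, and col_adj[j] the converse.
    """
    row_adj, col_adj = {}, {}
    for (i, j) in A:
        row_adj.setdefault(i, set()).add(j)
        col_adj.setdefault(j, set()).add(i)
    return row_adj, col_adj

def markowitz_cost(row_adj, col_adj, i, j):
    # (r_i - 1)(c_j - 1): a classical bound on the fill created by
    # pivoting on a_ij, read off directly from the vertex degrees.
    return (len(row_adj[i]) - 1) * (len(col_adj[j]) - 1)
```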

Proceedings ArticleDOI
20 Oct 1976
TL;DR: An improved algorithm for computing the minors of a (large) sparse matrix of polynomials is described, with emphasis on efficiency and optimal ordering.
Abstract: An improved algorithm for computing the minors of a (large) sparse matrix of polynomials is described, with emphasis on efficiency and optimal ordering. A possible application to polynomial resultant computation is discussed.

Proceedings ArticleDOI
07 Jun 1976
TL;DR: A new bandwidth reduction method for sparse matrices which promises to be both fast and effective in comparison with known methods is described.
Abstract: The paper describes a new bandwidth reduction method for sparse matrices which promises to be both fast and effective in comparison with known methods. The algorithm operates on the undirected graph corresponding to the incidence matrix induced by the original sparse matrix, and separates into three distinct phases: (1) determination of a spanning tree of maximum length, (2) modification of the spanning tree into a free level structure of small width, (3) level-by-level numbering of the level structure. The final numbering produced corresponds to a renumbering of the rows and columns of a sparse matrix so as to concentrate non-zero elements of the matrix in a band about the main diagonal.
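Phases (1) and (3) can be sketched compactly: repeated breadth-first searches move the starting vertex toward an end of a longest path (giving a deep spanning tree), and the resulting levels are numbered in order, so every edge joins vertices in the same or adjacent levels. A sketch assuming a connected graph (phase (2), which narrows the level structure, is omitted; names are ours):

```python
def bfs_levels(adj, start):
    """Partition the component of `start` into breadth-first levels."""
    levels, seen, frontier = [], {start}, [start]
    while frontier:
        levels.append(frontier)
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return levels

def level_structure_ordering(adj):
    """Phases (1) and (3): deepest level structure found by repeated
    BFS, then level-by-level numbering. Assumes a connected graph."""
    start = next(iter(adj))
    levels = bfs_levels(adj, start)
    while True:
        # Restart from a low-degree vertex in the deepest level; stop
        # when the structure no longer gets deeper.
        end = min(levels[-1], key=lambda v: len(adj[v]))
        new_levels = bfs_levels(adj, end)
        if len(new_levels) <= len(levels):
            break
        levels = new_levels
    # Numbering level by level keeps every edge within adjacent
    # levels, bounding the bandwidth by the level widths.
    return [v for level in levels for v in level]
```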


Book ChapterDOI
01 Jan 1976
TL;DR: Various sparse matrix techniques which have been developed to make the solution of linear algebraic equations more efficient are described and related.
Abstract: The application of the finite element method invariably involves the solution of large sparse systems of linear algebraic equations, and the solution of these systems often represents a significant or even dominant component of the total cost of applying the method. The object of this paper is to describe and relate various sparse matrix techniques which have been developed to make the solution of these equations more efficient.

Journal ArticleDOI
TL;DR: An algorithm for finding the eigenvalues of the generalized eigenvalue problem Ax = λBx for sparse A and B matrices is discussed, and its performance is compared to that of the QZ algorithm for some test matrices.
Abstract: An algorithm for finding the eigenvalues of the generalized eigenvalue problem Ax = λBx for sparse A and B matrices is discussed. Its performance is compared to that of the QZ algorithm for some test matrices.
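As a modern point of reference (not the paper's code), SciPy exposes both routes: a dense QZ decomposition and a sparse shift-and-invert iteration for Ax = λBx. A sketch on a symmetric test pencil, assuming SciPy is available:

```python
import numpy as np
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg

n = 200
# A sparse symmetric test pair: a 1-D Laplacian A and a mass-like B.
A = scipy.sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
B = scipy.sparse.diags([1, 4, 1], [-1, 0, 1], shape=(n, n), format="csc") / 6.0

# Sparse route: a few eigenvalues of Ax = lambda Bx nearest zero,
# via shift-and-invert Lanczos iteration.
vals_sparse = np.sort(scipy.sparse.linalg.eigsh(A, k=5, M=B, sigma=0.0)[0])

# Dense route: the QZ algorithm on the full matrices.
AA, BB, Q, Z = scipy.linalg.qz(A.toarray(), B.toarray())
vals_dense = np.sort(np.diag(AA) / np.diag(BB))[:5]

print(np.allclose(vals_sparse, vals_dense, atol=1e-8))
```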

Journal ArticleDOI
TL;DR: A four-stage algorithm for the efficient solution of the standard eigenvalue problem for large sparse matrices is presented and Laguerre's iteration is used to find the remaining eigenvalues.
Abstract: A four-stage algorithm for the efficient solution of the standard eigenvalue problem for large sparse matrices is presented. The matrix whose eigenvalues are desired is first reduced to a block upper triangular form, if possible, to expose those eigenvalues that are readily identified. The reduced matrix is then scaled and transformed to a sparse Hessenberg matrix with numerical stability control. Laguerre's iteration is then used to find the remaining eigenvalues. Examples are given.
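Laguerre's iteration, the final stage, needs p, p', and p'' at each step and converges cubically to simple roots; in the paper these values would come from the sparse Hessenberg form rather than from explicit coefficients. A sketch on an explicitly given polynomial (illustrative only):

```python
import cmath

def laguerre_root(coeffs, x0, tol=1e-12, max_iter=100):
    """One root of sum(coeffs[i] * x**i) by Laguerre's method,
    starting from x0. coeffs[-1] (the leading coefficient) is nonzero."""
    n = len(coeffs) - 1                      # polynomial degree

    def horner(x):
        # Evaluate p, p' and p'' together by a Horner-style recurrence;
        # ppp accumulates p''/2, so it is doubled on return.
        p = pp = ppp = 0
        for c in reversed(coeffs):
            ppp = ppp * x + pp
            pp = pp * x + p
            p = p * x + c
        return p, pp, 2 * ppp

    x = complex(x0)
    for _ in range(max_iter):
        p, dp, ddp = horner(x)
        if abs(p) < tol:
            return x
        g = dp / p
        h = g * g - ddp / p
        root = cmath.sqrt((n - 1) * (n * h - g * g))
        denom = max(g + root, g - root, key=abs)  # larger-magnitude branch
        if denom == 0:
            x += 1.0                         # rare: restart from a shift
            continue
        x = x - n / denom
    return x

# Example: x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.
print(laguerre_root([-6, 11, -6, 1], 0.5))   # converges to one of them
```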

Journal ArticleDOI
TL;DR: In the paper it is proven that a graph can be "split" ("torn") by certain vertex sets (unknowns) such that the overall number of "fill-ins" may still be optimum, although ordering is done in all components separately and almost independently.
Abstract: Some new results are presented concerning the decomposition and pivoting of large sparse systems of linear equations. The paper uses graph-theoretic reasoning. The starting point is a set of results of Rose on triangulated graphs, separation sets, and optimal ordering of sparse matrices. It is proven that a graph (and thus at the same time the matrix it represents) can be "split" ("torn") by certain vertex sets (unknowns) such that the overall number of "fill-ins" may still be optimum, even though ordering is done in all components (submatrices) separately and almost independently. The results may have some significance for very large systems, where they may assist in cutting down set-up time. Some impact on the study of the possible benefits of using more than one floating-point processor in parallel may also be expected.
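The claim can be exercised with the classical "elimination game", which counts the fill produced by a given ordering; tearing then amounts to ordering the vertices of each component independently and numbering the separating set last. A small counting sketch under those assumptions (names and setup are ours):

```python
def fill_in(adj, order):
    """Count fill edges produced by eliminating vertices in `order`.

    adj: dict vertex -> set of neighbours (undirected graph of the
    matrix). Eliminating a vertex pairwise connects its remaining
    neighbours, mirroring the new nonzeros created by elimination.
    """
    g = {v: set(adj[v]) for v in adj}
    fill = 0
    for v in order:
        nbrs = [w for w in g[v] if w != v]
        for a_i, a in enumerate(nbrs):
            for b in nbrs[a_i + 1:]:
                if b not in g[a]:
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
        for w in nbrs:                 # remove v from the graph
            g[w].discard(v)
        del g[v]
    return fill
```

Given a separator S splitting the graph into parts P1 and P2, fill_in(adj, order_P1 + order_P2 + list(S)) measures a torn ordering in which each part is ordered on its own.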

Book ChapterDOI
01 Jan 1976
TL;DR: Linear programming in core using a variant of the Bartels-Golub decomposition of the basis matrix is considered, and it is shown that the "steepest edge" algorithm is practical.
Abstract: Linear programming in core using a variant of the Bartels-Golub decomposition of the basis matrix will be considered. This variant is particularly well-adapted to sparsity preservation, being capable of revising the factorisation without any fill-in whenever this is possible by permutations alone. In addition, strategies for column pivoting in the simplex method itself will be discussed, and in particular it will be shown that the "steepest edge" algorithm is practical. This algorithm has long been known to give good results in respect of the number of iterations, but has been thought to be impractical.
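For illustration, the steepest-edge rule prices each candidate column by reduced cost per unit length of its edge direction, rather than by reduced cost alone. A naive dense sketch of the criterion (names are ours; the update recurrences that make the edge norms cheap to maintain, which are what make the method practical, are omitted):

```python
import numpy as np

def steepest_edge_entering(B, N, c_B, c_N):
    """Choose the entering column by the steepest-edge criterion.

    B: m x m basis matrix, N: m x k nonbasic columns,
    c_B, c_N: their objective coefficients (minimization).
    Returns the index into N of the chosen column, or None if no
    column has a negative reduced cost. Naive version: one solve
    per candidate; practical codes update ||B^{-1} a_j|| instead.
    """
    y = np.linalg.solve(B.T, c_B)          # simplex multipliers
    reduced = c_N - N.T @ y                # reduced costs d_j
    best, best_ratio = None, 0.0
    for j in np.where(reduced < -1e-10)[0]:
        w = np.linalg.solve(B, N[:, j])    # edge direction in the basics
        gamma = np.sqrt(1.0 + w @ w)       # length of the edge
        ratio = -reduced[j] / gamma        # descent per unit step length
        if ratio > best_ratio:
            best, best_ratio = j, ratio
    return best
```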