
Showing papers on "Sparse matrix published in 1970"


Journal ArticleDOI
TL;DR: This is a concise critical survey of the theory and practice relating to ordered Gaussian elimination on sparse systems; a new method of renumbering by clusters is developed and its properties described.
Abstract: This is a concise critical survey of the theory and practice relating to the ordered Gaussian elimination on sparse systems. A new method of renumbering by clusters is developed, and its properties described. By establishing a correspondence between matrix patterns and directed graphs, a sequential binary partition is used to decompose the nodes of a graph into clusters. By appropriate ordering of the nodes within each cluster and by selecting clusters, one at a time, both optimal ordering and a useful form of matrix banding are achieved. Some results pertaining to the compatibility between optimal ordering for sparsity and the usual pivoting for numerical accuracy are included.
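The correspondence between elimination on a matrix pattern and operations on its graph can be sketched as follows: eliminating a node connects all of its not-yet-eliminated neighbors, and the edges so created are the fill-in, whose count depends on the ordering. This is an illustrative sketch of that graph view, not the paper's clustering algorithm:

```python
# Symbolic Gaussian elimination on a sparsity pattern, viewed as a graph:
# eliminating node v adds an edge between every pair of v's remaining
# neighbors (the fill-in). Different elimination orders give different fill.
def fill_in(adj, order):
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    fill = 0
    eliminated = set()
    for v in order:
        nbrs = [u for u in adj[v] if u not in eliminated]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:  # a new edge: one unit of fill
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
        eliminated.add(v)
    return fill

# A hypothetical 4-node star pattern: node 0 adjacent to 1, 2, 3.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(fill_in(star, [0, 1, 2, 3]))  # 3: eliminating the hub first fills in a triangle
print(fill_in(star, [1, 2, 3, 0]))  # 0: eliminating the leaves first creates no fill
```

The star example is the classic motivation for ordering: the same matrix pattern yields zero fill or dense fill depending solely on the elimination order.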

90 citations


Journal ArticleDOI
TL;DR: This study suggests quantitatively how rapidly sparse matrices fill up for increasing densities, and emphasizes the necessity for reordering to minimize fill-in.
Abstract: A comparison in the context of sparse matrices is made between the Product Form of the Inverse PFI (a form of Gauss-Jordan elimination) and the Elimination Form of the Inverse EFI (a form of Gaussian elimination). The precise relation of the elements of these two forms of the inverse is given in terms of the nontrivial elements of the three matrices L, U, U^-1 associated with the triangular factorization of the coefficient matrix A; i.e., A = L·U, where L is lower triangular and U is unit upper triangular. It is shown that the zero-nonzero structure of the PFI always has more nonzeros than the EFI. It is proved that Gaussian elimination is a minimal algorithm with respect to preserving sparseness if the diagonal elements of the matrix A are nonzero. However, Gaussian elimination is not necessarily minimal if A has some zero diagonal elements. The same statements hold for the PFI as well. A probabilistic study of fill-in and computing times for the PFI and EFI sparse matrix algorithms is presented. This study suggests quantitatively how rapidly sparse matrices fill up for increasing densities, and emphasizes the necessity for reordering to minimize fill-in.

I. Introduction. A sparse matrix is a matrix with very few nonzero elements. In many applications, a rough rule seems to be that there are O(N) nonzero entries; typically, say, 2 to 10 nonzero entries per row. If the dimension N of the matrix is not large, then there is no compelling reason to treat sparse matrices differently from full matrices. It is when N becomes large and one attempts computations with the sparse matrix that it becomes necessary to take advantage of the zeros. The reason for this is obvious: there is a storage requirement of the order of N^2 and an arithmetic operations count of the order of N^3 for many matrix algorithms using the full matrix.

On the other hand, by storing only nonzero quantities and using logical operations to decide when an arithmetic operation is necessary, the storage requirement and arithmetic operations count can be reduced by a factor of N in many instances. Of course, this not only becomes a sizable savings of computer time, but also dictates whether or not some problems can be attempted. Computations with sparse matrices are not new. Iterative techniques for these matrices, especially those related to the solution of partial differential equations, have been extensively developed (1). Sparse matrix methods for solving linear equations by direct methods have been used for a long time in linear programming, and there is a large body of literature, computational experience, programs, and artfulness which has been built up in this area (2)-(15). In most linear programming codes, the product form of the inverse (PFI) is the method used to solve linear equations (16)-(19), although there are exceptions (20)-(21). Methods for scaling, pivoting for accuracy and sparseness, structuring data, and handling input-output have been extensively developed (22)-(72). However, there do not seem to exist rigorous results
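The storing-only-nonzeros idea in the introduction can be sketched with a compressed-sparse-row layout (an illustrative modern formulation, not the paper's data structure): storage and matrix-vector work scale with the number of nonzeros rather than with N^2.

```python
# Minimal CSR (compressed sparse row) sketch: keep only the nonzero values,
# their column indices, and a row-pointer array marking where each row starts.
def csr_from_dense(A):
    data, col, rowptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                col.append(j)
        rowptr.append(len(data))
    return data, col, rowptr

def csr_matvec(data, col, rowptr, x):
    # Multiply only over stored nonzeros: O(nnz) work instead of O(N^2).
    y = []
    for i in range(len(rowptr) - 1):
        s = 0
        for k in range(rowptr[i], rowptr[i + 1]):
            s += data[k] * x[col[k]]
        y.append(s)
    return y

A = [[4, 0, 0], [0, 0, 2], [1, 0, 3]]
data, col, rowptr = csr_from_dense(A)
print(csr_matvec(data, col, rowptr, [1, 1, 1]))  # [4, 2, 4]
```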

63 citations


Journal ArticleDOI
TL;DR: Bounds are derived on the inverse of a tridiagonal matrix with positive off-diagonal elements, a matrix type arising in the discrete analogue of certain second-order differential operators.
Abstract: During an investigation into the convergence properties of natural splines it was found useful to have bounds on the inverse of a tridiagonal matrix with positive off-diagonal elements. Matrices of this type arise in other branches of numerical analysis, in particular in the discrete analogue of certain second-order differential operators, and so it may be useful to record these results. The matrix is

49 citations


Journal ArticleDOI
TL;DR: The number of multiplications required for matrix multiplication, for the triangular decomposition of a matrix with partial pivoting, and for the Cholesky decomposition, can be roughly halved if Winograd's identity is used to compute the inner products involved.
Abstract: The number of multiplications required for matrix multiplication, for the triangular decomposition of a matrix with partial pivoting, and for the Cholesky decomposition of a positive definite symmetric matrix, can be roughly halved if Winograd's identity is used to compute the inner products involved. Floating-point error bounds for these algorithms are shown to be comparable to those for the normal methods provided that care is taken with scaling.
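As a rough sketch of the idea (not the paper's error analysis), Winograd's identity rewrites an inner product so that half of the multiplications depend on one vector only; in a matrix product those terms can be precomputed once per row and once per column and then reused.

```python
# Winograd's identity for an inner product of even length n:
#   x.y = sum_j (x[2j] + y[2j+1]) * (x[2j+1] + y[2j]) - xi - eta
# where xi = sum_j x[2j]*x[2j+1] and eta = sum_j y[2j]*y[2j+1].
# In a matrix multiplication, xi is precomputed once per row of the left
# factor and eta once per column of the right factor, roughly halving the
# multiplication count across all the inner products.
def winograd_dot(x, y):
    assert len(x) == len(y) and len(x) % 2 == 0
    half = len(x) // 2
    xi = sum(x[2 * j] * x[2 * j + 1] for j in range(half))
    eta = sum(y[2 * j] * y[2 * j + 1] for j in range(half))
    s = sum((x[2 * j] + y[2 * j + 1]) * (x[2 * j + 1] + y[2 * j])
            for j in range(half))
    return s - xi - eta

print(winograd_dot([1, 2, 3, 4], [5, 6, 7, 8]))  # 70, same as the ordinary dot product
```

The scaling caveat in the abstract matters because the identity adds before multiplying, so badly scaled operands can lose accuracy relative to the ordinary inner product.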

36 citations


DOI
01 Jan 1970
TL;DR: The Fast Multipole Method is used to overcome difficulties and apply it to the 3-D analysis of electron guns, which involves both solving BEM and n-body problems.
Abstract: Although BEM enjoys boundary-only discretization, the computational work and memory requirements become prohibitive for large-scale 3-D problems due to its dense matrix formulation. The computation of n-body problems, such as in charged particle simulation, has a similar inherent difficulty. In this paper, we use the Fast Multipole Method to overcome these difficulties and apply it to the 3-D analysis of electron guns, which involves solving both BEM and n-body problems.

32 citations


Journal ArticleDOI
TL;DR: A method is presented for implementing Gaussian elimination and subsequent back substitution that minimizes the amount of central computer memory required, provides a more flexible means of manipulating large matrices, and dramatically reduces computer time.
Abstract: The most efficient means of solving most systems of linear equations arising in structural analysis is by Gaussian elimination and subsequent back substitution. A method is presented for its implementation. Direct solutions are obtained with sparse matrix factors which preserve the operations of the Gaussian elimination for repeat solutions. The method together with techniques for its application are graphically described. Its use in many types of engineering problems requiring solutions to systems of linear equations will: (1) Minimize the amount of central computer memory required; (2) provide a more flexible means of manipulating large matrices; and (3) dramatically reduce computer time.
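The factor-once, solve-many idea behind preserving the elimination's operations for repeat solutions can be sketched with a dense Doolittle LU factorization (the paper works with sparse factors and structural engineering systems; this toy version, without pivoting, only shows the reuse pattern):

```python
# Doolittle LU without pivoting (assumes nonzero pivots) on a copy of A.
# L (unit diagonal) and U are stored together in one array.
def lu_factor(A):
    n = len(A)
    LU = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]            # multiplier, stored in L's slot
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    # Reuses the stored factors: forward substitution with L, then back
    # substitution with U. Repeat solutions cost O(n^2), not O(n^3).
    n = len(b)
    y = b[:]
    for i in range(n):
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            y[i] -= LU[i][j] * y[j]
        y[i] /= LU[i][i]
    return y

LU = lu_factor([[2.0, 1.0], [1.0, 3.0]])    # factor once...
x1 = lu_solve(LU, [3.0, 4.0])               # ...solve for as many
x2 = lu_solve(LU, [1.0, 0.0])               # right-hand sides as needed
```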

21 citations


01 Aug 1970
TL;DR: Theoretic and experimental results on the method of conjugate gradients for nonlinear programming and relations between properties of a graph and the eigenvalues of its adjacency matrix are presented.
Abstract: Contents: Integer and linear programming; Theoretic and experimental results on the method of conjugate gradients for nonlinear programming; Iterative procedures for finding roots of functions; Bounds for eigenvalues and complex matrices; Relations between properties of the graph and the eigenvalues of its adjacency matrix; and Sparse matrices.

17 citations


Patent
09 Dec 1970
TL;DR: In this paper, an automated process for network optimization which fully incorporates sparse matrix techniques is presented. The system has further application to any generalized system which may be described mathematically by a set of algebraic and differential equations.
Abstract: An automated process for network optimization which fully incorporates sparse matrix techniques. The system has further application to any generalized system which may be described mathematically by a set of algebraic and differential equations. Operating on a user input which defines an electrical network, the system generates lists of formatted data representing a set of algebraic and differential equations. The variables in the equations are solved for by a process of Crout elimination, and each resulting operation in the Crout algorithm is identified in accordance with one of a plurality of variability types. Then, a separate Solve program code for each variability type is compiled by the system. The individual Solve programs are then executed in a hierarchical loop sequence during the solution of the network design problem by means of a Newtonian iteration, expressed as ∂A(x)/∂x · Δx = −A(x), where A(x) represents a tableau vector consisting of a set of algebraic and differential equations for the electrical network.
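The Newtonian iteration referred to above linearizes A(x) = 0 at the current iterate and solves a linear system for the update. A minimal scalar sketch (the function names F and J are illustrative, not taken from the patent, and the patent's system version solves the Jacobian system by Crout elimination):

```python
# Scalar Newton iteration: each step solves J(x) * dx = -F(x), which for one
# unknown reduces to dx = -F(x) / J(x).
def newton(F, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / J(x)
    return x

# Hypothetical example: find the root of x^2 - 2 starting from x = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```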

17 citations


01 Jan 1970
TL;DR: In this article, various algorithms are constructed to transform arbitrary symmetric positive definite sparse matrices, as well as matrices in band form, doubly bordered band form, and doubly bordered block diagonal form.
Abstract: Transformations of sparse linear systems by row-column permutations are considered and various algorithms are constructed to transform arbitrary symmetric positive definite sparse matrices, as well as matrices in band form, doubly bordered band form, and doubly bordered block diagonal form.

12 citations


Journal ArticleDOI
TL;DR: A method for eliminating redundancy is presented which is best suited for large, e.g. sparse, matrices; a matrix description language, given in Backus normal form, is chosen which fits the data structure of the proposed method.
Abstract: In many practical problems matrices become extremely large, so their representation becomes a data-structure problem. It is necessary to arrange their elements and substructures for effective data processing. A method for eliminating redundancy is presented which is best suited for large, e.g. sparse, matrices. A matrix description language, given in Backus normal form, is chosen which fits the data structure of the proposed method.

6 citations


Journal ArticleDOI
TL;DR: In this paper, a method of inversion of the nodal-admittance matrix in symbolic (i.e. rational-function) form is presented, which demonstrates the suppression of common factors.
Abstract: A method of inversion of the nodal-admittance matrix in symbolic (i.e. rational-function) form is presented. The derivation demonstrates the suppression of common factors.

Journal ArticleDOI
TL;DR: The expected number of arithmetic operations required to compute the table of factors in the LU decomposition of an nth-order sparse symmetric incidence matrix is shown to increase quadratically with the number of branches incident at any node, but only linearly with the number of nodes.
Abstract: The expected number of arithmetic operations required to compute the table of factors in the LU decomposition of an nth-order sparse symmetric incidence matrix is shown to increase quadratically with the number of branches incident at any node, but only linearly with the number of nodes. This same result is observed if the sparse incidence matrix is symmetric only in pattern. It is assumed that the same number of branches is incident at every node and that the nodes are ordered randomly.

Journal ArticleDOI
TL;DR: In this article, a method for the stability analysis of structures using a combination of matrix iteration and matrix decomposition is presented, where two critical loads are present whose absolute values are relatively close or identical.
Abstract: A method is presented for the stability analysis of structures using a combination of matrix iteration and matrix decomposition. The method is extended to stability analysis of structures in which two critical loads are present whose absolute values are relatively close or identical.

Journal ArticleDOI
TL;DR: An efficient parallel technique for reducing sparse matrices that can be applied to analysis tables is investigated and a very compact form results which will contribute to a greatly reduced time when accessing the given data structure.
Abstract: This paper investigates an efficient parallel technique for reducing sparse matrices that can be applied to analysis tables. Such matrices take up a great amount of memory space because of their zero entries and, hence, a subtle compaction scheme is necessary. The benefit of the parallel approach introduced herein is that a very compact form results, which will contribute to a greatly reduced time when accessing the given data structure.

DOI
01 Jan 1970
TL;DR: It is shown that using the zero moment properties and the compact support properties of the multi-wavelet basis, the h-adaption proceeds in an identical fashion for both the untruncated and truncated system matrices.
Abstract: We present a sparse h-adaptive boundary integral equation solution for the 2D Laplace equation using the multi-wavelets of Alpert. We show that using the zero moment properties and the compact support properties of the multi-wavelet basis we can produce an auto-refining method. Furthermore, using the same properties of the multi-wavelets on the matrices, they can be made sparse. Unfortunately, the structure of the sparse matrices makes it very difficult to make use of fast iterative solvers. We show that the h-adaption proceeds in an identical fashion for both the untruncated and truncated system matrices (even with very severe truncation of modest sized system matrices).

DOI
01 Jan 1970
TL;DR: In this paper, two different techniques, namely compact support and multizone decomposition, are used to improve the conditioning of the coefficient matrix which is a full matrix due to the use of the global radial basis functions.
Abstract: This paper presents the application of the radial basis functions (RBF) for solving a set of non-linear hydrodynamic models for marine environments. Two different techniques, namely compact support and multizone decomposition, are used to improve the conditioning of the resultant coefficient matrix, which is a full matrix due to the use of global radial basis functions. The idea of the compactly supported radial basis function (CSRBF) is to reduce the full matrix to a banded sparse matrix. The multizone approach is similar to the commonly used domain decomposition. The resulting sparse or smaller matrix has been shown to improve both stability and computational efficiency. Both techniques are verified by comparison with the global multiquadric radial basis function applied to a linear and a real non-linear two-dimensional hydrodynamic model in simulating tidal current and water flow circulation patterns.

Journal ArticleDOI
V. A. LoDato1
TL;DR: An efficient permutation algorithm is developed for a certain class of matrices; namely, those that possess constant and variable elements and results show that an optimum bandwidth is maintained for the L/U decomposition of the matrix.
Abstract: An efficient permutation algorithm is developed for a certain class of matrices; namely, those that possess constant and variable elements. A partitioning technique is employed that separates the constant portion from the variable part. A number of tests are developed to maintain an optimum bandwidth. The algorithm has been tested for several hundred sparse matrices and results show that an optimum bandwidth is maintained for the L/U decomposition of the matrix.