
Showing papers on "Sparse matrix published in 1980"



Journal ArticleDOI
TL;DR: Three generalizations of conjugate-gradient acceleration are described, designed to speed up the convergence of basic iterative methods that are not necessarily symmetrizable.

291 citations


Journal ArticleDOI
TL;DR: An implementation of the Reverse Cuthill-McKee (RCM) algorithm whose run-time complexity is proved to be linear in the number of nonzeros in the matrix is provided.
Abstract: The Reverse Cuthill-McKee (RCM) algorithm is a method for reordering a sparse matrix so that it has a small envelope. Given a starting node, we provide an implementation of the algorithm whose run-time complexity is proved to be linear in the number of nonzeros in the matrix. Numerical experiments are provided which compare the performance of the new implementation to a good conventional implementation.

90 citations
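
A minimal sketch of the reordering this paper studies, using SciPy's linear-time reverse_cuthill_mckee (a later implementation, not the paper's own code); the random test matrix and the bandwidth helper are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Random symmetric test matrix (a stand-in for a finite-element-style matrix).
A = sp.random(200, 200, density=0.02, random_state=0, format="csr")
A = (A + A.T + sp.identity(200)).tocsr()

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]               # symmetrically permuted matrix

def bandwidth(M):
    coo = M.tocoo()
    return int(np.abs(coo.row - coo.col).max())

print(bandwidth(A), bandwidth(B))  # RCM should shrink the envelope/bandwidth
```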


Journal ArticleDOI
Wing, Huang
TL;DR: Hu's level scheduling strategy is applied to examples of sparse matrix equations with surprisingly good results.
Abstract: The solution process of Ax = b is modeled by an acyclic directed graph in which the nodes represent the arithmetic operations applied to the elements of A, and the arcs represent the precedence relations that exist among the operations in the solution process. Operations that can be done in parallel are identified in the model and the absolute minimum completion time and lower bounds on the minimum number of processors required to solve the equations in minimal time can be found from it. Properties of the model are derived. Hu's level scheduling strategy is applied to examples of sparse matrix equations with surprisingly good results. Speed-up using parallel processing is found to be proportional to the number of processors when it is 10-20 percent of the order of A.

85 citations
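
A toy sketch of the level-scheduling idea applied here (the DAG below is invented, not one of the paper's examples): each operation's level is its longest distance from an input, all operations on one level can run in parallel, and the number of levels is the minimum completion time for unit-time operations:

```python
import math

# deps: operation -> list of operations whose results it needs.
deps = {
    "op1": [], "op2": [], "op3": ["op1"],
    "op4": ["op1", "op2"], "op5": ["op3", "op4"],
}

def level(op):
    # Longest path from the inputs; fine without memoization for a toy DAG.
    return 0 if not deps[op] else 1 + max(level(d) for d in deps[op])

schedule = {}
for op in deps:
    schedule.setdefault(level(op), []).append(op)

for step in sorted(schedule):
    print(f"time step {step}: in parallel -> {sorted(schedule[step])}")

# A simple lower bound on processors needed to finish in minimum time.
print(math.ceil(len(deps) / len(schedule)))
```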



Journal ArticleDOI
TL;DR: It is shown that all recently proposed sparse matrix algorithms for network solution by tearing belong to a set of algorithms which is derived by applying block matrix elimination to a partitioned system of network equations.
Abstract: Network solution by tearing consists of partitioning the network into subnetworks, solving each subnetwork separately, and then combining the subnetwork solutions to obtain the solution of the entire network. In this paper it is shown that all recently proposed sparse matrix algorithms for network solution by tearing belong to a set of algorithms which is derived by applying block matrix elimination to a partitioned system of network equations. The computational requirements of the algorithms are determined and compared. Equation sparsity is considered at all levels in the solution process. In particular, the structures of the equations at the subnetwork level as well as the interconnection level are analyzed in detail.

38 citations
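
A minimal numpy sketch (with made-up blocks) of the block-elimination kernel to which the paper reduces the tearing algorithms: eliminate the subnetwork unknowns x1, solve the interconnection-level Schur-complement system for x2, then back-substitute:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 4, 3
A = rng.normal(size=(n1, n1)) + 5 * np.eye(n1)   # subnetwork block
B = rng.normal(size=(n1, n2))                    # coupling blocks
C = rng.normal(size=(n2, n1))
D = rng.normal(size=(n2, n2)) + 5 * np.eye(n2)   # interconnection block
b1, b2 = rng.normal(size=n1), rng.normal(size=n2)

# Eliminate x1: S = D - C A^{-1} B is the interconnection-level system.
AinvB = np.linalg.solve(A, B)
Ainvb1 = np.linalg.solve(A, b1)
S = D - C @ AinvB
x2 = np.linalg.solve(S, b2 - C @ Ainvb1)
x1 = Ainvb1 - AinvB @ x2

# Check against the unpartitioned solve.
M = np.block([[A, B], [C, D]])
assert np.allclose(np.concatenate([x1, x2]),
                   np.linalg.solve(M, np.concatenate([b1, b2])))
```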


Journal ArticleDOI
TL;DR: In this paper, the authors develop graph-theoretic models for an extended set of components encountered in water-distribution systems, such as check valves, pressure-reducing valves, and booster pumps.

Abstract: Models for an extended set of components encountered in water-distribution systems are developed based on graph-theoretic concepts. It is shown that components such as check valves, pressure-reducing valves, and booster pumps may be included in the system of equations in which junction (nodal) heads are being solved by selecting appropriate admittance values (similar to those associated with simple pipes) indicated by the current operating conditions of the valves. The formulation-solution procedure is based on sparse matrix methods, which lead to efficient and economical analysis of water-distribution systems on a digital computer.

26 citations
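
A hedged sketch of the admittance-based formulation: assemble sparse nodal equations Y h = q for a toy network in which each component (pipe, open or closed check valve) contributes an admittance g between its two nodes. Real water networks are nonlinear, and the paper re-selects admittances from current operating conditions, so this shows a single linearized step only; all values are invented:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# (node_i, node_j, admittance); a closed check valve would contribute g = 0,
# an open one the admittance of a simple pipe.
components = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 0.5)]
n = 3
rows, cols, vals = [], [], []
for i, j, g in components:
    rows += [i, i, j, j]
    cols += [i, j, j, i]
    vals += [g, -g, g, -g]
Y = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsc()

q = np.array([0.0, 1.0, 0.0])   # hypothetical nodal demands
h = np.zeros(n)                 # node 0 grounded at a fixed head of 0
h[1:] = spsolve(Y[1:, 1:], q[1:])
```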


16 Jun 1980
TL;DR: This paper proposes a method which eliminates the need for recursive computations in the solution of potential equations in late time electrostatic codes by replacing them with a number of non-recursive operations.
Abstract: The incomplete Cholesky decomposition and the subsequent iterative solution by the conjugate gradient method have been described recently by D. Kershaw. The drawback of a triangular decomposition on a vector machine is the need for recursive computations. This paper proposes a method which eliminates the need for recursive computations. They are replaced by a number of non-recursive operations. This method can be utilized in the solution of potential equations in late-time electrostatic codes.

17 citations
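
The baseline method the paper starts from is easy to reproduce. The sketch below uses SciPy's incomplete LU (spilu) as a stand-in for incomplete Cholesky, since SciPy ships no incomplete Cholesky; the forward and back substitutions inside the preconditioner are exactly the recursive computations the paper seeks to replace:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spilu

n = 100                                  # 1-D Poisson model problem (SPD)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)            # incomplete factorization
M = LinearOperator((n, n), matvec=ilu.solve)   # preconditioner M ~ A^{-1}
x, info = cg(A, b, M=M)
assert info == 0                         # converged
```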


Journal ArticleDOI
TL;DR: A detailed analysis of the implementation of some sparse matrix techniques in the integration of linear systems of ordinary differential equations is presented, and the possibility of improving the results by the use of iterative refinement and large values of a special parameter is discussed.

13 citations
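
A hedged sketch of the setting (the test matrix and step size are invented): each implicit integration step for y' = Ay solves the sparse system (I - hA) y_new = y_old, and one pass of iterative refinement, as mentioned in the TL;DR, is a residual-correction solve reusing the same factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n, h = 50, 0.01
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") * 100.0
M = (sp.identity(n, format="csc") - h * A).tocsc()
lu = splu(M)                     # sparse LU, factored once

y = np.ones(n)
y_new = lu.solve(y)              # basic solve
r = y - M @ y_new                # residual
y_new += lu.solve(r)             # one iterative-refinement correction
```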


Proceedings ArticleDOI
01 Dec 1980
TL;DR: In this paper, a streamlined Gram-Schmidt orthogonalization algorithm is proposed to reduce the complexity of time propagation of the Kalman filter covariance matrix in U-D covariance factorization.
Abstract: Time propagation of the Kalman filter covariance matrix involves an operation of the form ΦPΦ^T, where for many applications Φ is a sparse transition matrix. When the filter implementation employs U-D covariance factorization (i.e., recursions for U and D are used, where P = UDU^T with U unit upper triangular and D diagonal) the corresponding time propagation involves W = ΦU. Both the ΦPΦ^T and ΦU computations can exploit transition matrix sparseness. If, however, the structure of W is not exploited, the computation involved with transforming W to an equivalent triangular form can be prohibitively expensive. The contribution of this paper is a streamlined Gram-Schmidt orthogonalization algorithm that can dramatically reduce U-D time propagation computation costs.

12 citations
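
For reference, the standard (non-streamlined) weighted modified Gram-Schmidt step the paper accelerates, sketched in numpy: it re-triangularizes W = ΦU into Ubar, Dbar with Ubar diag(Dbar) Ubar^T = W diag(D) W^T, but does not exploit Φ's sparseness; the transition matrix below is invented:

```python
import numpy as np

def mwgs(W, D):
    """Return unit upper triangular Ubar and diagonal Dbar with
    Ubar @ diag(Dbar) @ Ubar.T == W @ diag(D) @ W.T."""
    n = W.shape[0]
    W = W.copy()
    Ubar, Dbar = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):
        v = D * W[j]                      # D-weighted row j
        Dbar[j] = W[j] @ v
        for i in range(j):
            Ubar[i, j] = (W[i] @ v) / Dbar[j]
            W[i] -= Ubar[i, j] * W[j]     # modified Gram-Schmidt sweep
    return Ubar, Dbar

rng = np.random.default_rng(0)
Phi = np.eye(4) + np.diag([0.1, 0.1, 0.1], 1)    # sparse transition matrix
U = np.triu(rng.normal(size=(4, 4)), 1) + np.eye(4)
D = rng.uniform(1.0, 2.0, size=4)
W = Phi @ U
Ubar, Dbar = mwgs(W, D)
assert np.allclose(Ubar @ np.diag(Dbar) @ Ubar.T, W @ np.diag(D) @ W.T)
```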


Dissertation
01 Jan 1980
TL;DR: A software package for the storage and manipulation of sparse matrices is described, consisting of basic matrix operations (addition, multiplication, etc.) and the solution of linear systems by iterative methods.
Abstract: The algebraic eigenvalue problem occurring in a variety of problems in the Natural, Engineering and Social Sciences can with some advantage be solved by matrix methods. However, these problems become more difficult to handle when the matrices involved are large and sparse, because the storage and manipulation of these types of matrices must be arranged so that, firstly, no storage is wasted by retaining the zero elements of the matrix and, secondly, valuable computer time is saved by not operating on the zero elements when unnecessary. For this purpose, we have previously developed a software package for the storage and manipulation of sparse matrices, which consists of basic matrix operations (i.e. addition, multiplication, etc.) and the solution of linear systems by iterative methods. However, in that work we encountered a great deal of difficulty in handling the operations which generate non-zero elements during processes such as Gaussian elimination. [Continues.]
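
The fill-in difficulty described at the end has a compact SciPy illustration (the arrow matrix is a textbook example, not taken from the thesis): eliminating the dense node first fills the factors, while a fill-reducing ordering keeps them sparse:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 50
A = sp.lil_matrix((n, n))
A.setdiag(4.0)
A[0, :] = 1.0                 # dense first row and column: an "arrow" matrix
A[:, 0] = 1.0
A[0, 0] = float(n)            # keep the first pivot well scaled
A = A.tocsc()

bad = splu(A, permc_spec="NATURAL")    # dense node eliminated first
good = splu(A, permc_spec="COLAMD")    # fill-reducing ordering
print(bad.L.nnz + bad.U.nnz)           # roughly n^2: massive fill-in
print(good.L.nnz + good.U.nnz)         # roughly 3n: almost no fill
```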

Journal ArticleDOI
TL;DR: A description of the hierarchic decomposition method (i.e. multiple decomposition of a decomposed network) which can be applied in computer analysis of electronic networks is given and a new universal algorithm for generation of k-trees is presented.
Abstract: This paper concerns a new approach to the topological analysis of linear networks. It gives a description of the hierarchic decomposition method (i.e. multiple decomposition of a decomposed network), which can be applied in computer analysis of electronic networks. A new universal algorithm for the generation of k-trees is presented. For many years topological methods have been used to analyse only very small networks. This is a result of the very fast growth of the number of expressions in the topological formula for the determinant of the network matrix as the network size increases. This remark concerns both linear graphs and flow graphs. Generally known computer programs for topological analysis (for example SNAP, NASAP, TAPLAN) allow analysis of networks of up to 30 nodes (practically, 10 nodes and 20 branches). The difficulties mentioned above have brought about a decrease in interest in topological methods. The rapid development of numerical methods for solving large systems of equations described by sparse coefficient matrices has resulted in a decrease in the competitiveness of topological methods in relation to numerical methods. The elaboration of a method and a program of topological analysis by a direct decomposition method (see Reference 3, program ADEN) has allowed analysis of networks of up to 50 nodes. However, this method has many significant restrictions, and it can hardly be considered convenient for the user. A brief description of the method and algorithms of topological analysis by the so-called hierarchic decomposition method is presented in this paper. Some results of the hierarchic decomposition method are presented in Reference 4.

Journal ArticleDOI
TL;DR: The minimum bandwidth of a sparse matrix is shown to be a kind of width of a graph obtained from the matrix in question, and a new renumbering algorithm which can reduce the bandwidth to the minimum or a near-minimum value is presented.

01 Oct 1980
TL;DR: In this paper, a variant of the ellipsoid update is developed to take advantage of the range constraints that often occur in linear programs (i.e., constraints of the form l <= a^T x <= u, where u - l is reasonably small).
Abstract: The ellipsoid algorithm associated with Shor, Khachiyan and others has certain theoretical properties that suggest its use as a linear programming algorithm. Some of the practical difficulties are investigated here. A variant of the ellipsoid update is first developed, to take advantage of the range constraints that often occur in linear programs (i.e., constraints of the form l <= a^T x <= u, where u - l is reasonably small). Methods for storing the ellipsoid matrix are then discussed for both dense and sparse problems. In the large-scale case, a major difficulty is that the desired ellipsoid cannot be represented compactly throughout an arbitrary number of iterations. Some schemes are suggested for economizing on storage, but any guarantee of convergence is effectively lost. At this stage there remains little room for optimism that an ellipsoid-based algorithm could compete with the simplex method on problems with a large number of variables.
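
For context, the classical central-cut update that the report's range-constraint variant refines, sketched in numpy; the dense n x n matrix B here is precisely the storage burden discussed above:

```python
import numpy as np

def ellipsoid_update(c, B, a):
    """One central cut: shrink {x : (x-c)^T B^{-1} (x-c) <= 1} toward the
    half-space a^T x <= a^T c (a is the gradient of a violated constraint)."""
    n = c.size
    Ba = B @ a
    g = Ba / np.sqrt(a @ Ba)                    # normalized cut direction
    c_new = c - g / (n + 1.0)
    B_new = (n * n / (n * n - 1.0)) * (B - (2.0 / (n + 1.0)) * np.outer(g, g))
    return c_new, B_new

c, B = np.zeros(3), np.eye(3)                   # start from the unit ball
c, B = ellipsoid_update(c, B, np.array([1.0, 0.0, 0.0]))
```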

ReportDOI
08 Jun 1980
TL;DR: A single topic of general sparse matrix solution using scalar processors may be broken into specialized areas of study when implementation on vector architectures is considered.
Abstract: The single topic of general sparse matrix solution using scalar processors may be broken into specialized areas of study when implementation on vector architectures is considered. First, highly sparse matrices, usually representing ODE/algebraic-modeled systems, are easily decoupled by re-ordering. At a minimum, locally-decoupled equations may be solved in pipelined scalar mode; if the decoupled subsystems can be arranged (a) to have identical sparsity, and (b) to be stored a constant stride apart, then a simultaneous sparse solver may be invoked and a vector solution obtained. As sparse systems become locally coupled, as occurs in finite element and finite difference problems, vectors are easily defined within the coupled subsystems. It is worth making a further distinction between: (a) intra-nodal or intra-element coupling, where the dimension of dense submatrices is proportional to the number of unknowns/node or unknowns/finite element, and (b) inter-nodal or inter-element coupling, where the coupling between grid nodes or finite elements determines the vector length. Banded and profile matrices result from the latter. The associated vector lengths are the products of the number of unknowns/node (element) and the number of coupled nodes. These lengths are therefore always longer than in the former case, so that common bandsolvers offer the highest performance of any sparse solvers.
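
A hedged modern analogue of the "simultaneous sparse solver" idea: many decoupled subsystems with identical structure, stored a constant stride apart, solved in one batched (vectorized) call rather than one by one; the sizes and data below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 1000, 5                       # 1000 subsystems, each 5 x 5
A = rng.normal(size=(m, k, k)) + 4 * np.eye(k)   # identical shape and stride
b = rng.normal(size=(m, k))
x = np.linalg.solve(A, b)            # one vectorized solve over all systems
```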

Journal ArticleDOI
TL;DR: A comparison of the computation speed and storage requirements of three algorithms for the determination of the periodic steady-state response of nonlinear circuits, namely the Newton, extrapolation, and gradient methods, shows the latter to be the most attractive if an efficient function minimization routine is available.
Abstract: A comparison of the computation speed and storage requirements of three algorithms for the determination of the periodic steady-state response of nonlinear circuits, namely the Newton, extrapolation, and gradient methods, shows the latter to be the most attractive if an efficient function minimization routine is available. The gradient algorithm equations have been derived on the basis of a general tableau representation of the network equations which, in contrast to the recently reported state variable formulation, lends itself to straightforward implementation in modern network transient analysis programs which use sparse matrix techniques. The algorithm has been implemented with one such program and tested on several circuits using two optimization routines. Satisfactory results are obtained with the variable metric routine, but convergence is sensitive to scaling and the initial time.


Journal ArticleDOI
TL;DR: A modification of the Danilewski method is presented, permitting the solution of the eigenvalue problem for a constant sparse matrix of large order to be reduced to the solution for a polynomial matrix of lower order.
Abstract: A modification of the Danilewski method is presented, permitting the solution of the eigenvalue problem for a constant sparse matrix of large order to be reduced to the solution of the same problem for a polynomial matrix of lower order. Certain solution algorithms are proposed for a partial eigenvalue problem for the polynomial matrix. Questions of the realization of the algorithms on a model PRORAB computer are examined.

Journal ArticleDOI
TL;DR: A quasi-Newton method for unconstrained minimization is presented, which uses a Cholesky factorization of an approximation to the Hessian matrix, which reduces storage requirements and simplifies the calculation of the search direction.
Abstract: A quasi-Newton method for unconstrained minimization is presented, which uses a Cholesky factorization of an approximation to the Hessian matrix. In each step a new row and column of this approximation matrix are determined and its Cholesky factorization is updated. This reduces storage requirements and simplifies the calculation of the search direction. Precautions are taken to keep the approximation matrix positive definite. It is shown that under the usual conditions the method converges superlinearly or even n-step quadratically.
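
A small sketch of why the factorization pays off: with the approximation H kept in Cholesky form, each search direction costs only two triangular solves instead of a general linear solve (the matrix and gradient below are placeholders):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite approximation
g = np.array([1.0, -2.0])                # current gradient
c, low = cho_factor(H)                   # Cholesky factorization of H
p = -cho_solve((c, low), g)              # search direction p = -H^{-1} g
```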

Journal ArticleDOI
TL;DR: In this paper an efficient algorithm is provided for determining the block structure of the Cholesky factor of a partitioned matrix A, along with some bounds on the execution time of the algorithm.

Patent
18 Nov 1980
TL;DR: A character compaction and generation method adapted to the generation of complex characters such as Kanji characters is described in this article, where each character in the complex character set is compacted and stored in memory (50) one time only, with decompaction being performed each time a given character is to be generated, with the original character being reconstructed for printing or display from the compacted character defined in the sparse matrix.
Abstract: A character compaction and generation method and apparatus which is particularly adapted to the generation of complex characters such as Kanji characters. A dot matrix (Fig. 2-1) defining a given character is compacted into a sparse matrix (Fig. 2-7), with the original character being reconstructed for printing or display from the compacted character defined in the sparse matrix. Each character in the complex character set is compacted and stored in memory (50) one time only, with decompaction being performed each time a given character is to be generated. A set of symbols is defined to represent different patterns which occur frequently in the entire complex character set. Different combinations of the symbols define a given character. The information stored for each sparse matrix representing a given character comprises each symbol in the sparse matrix, its position, and its size parameter if the symbol represents a family of patterns which differ only in size. Three groups (A, B, C) of different patterns are defined which occur frequently in the complex character set, namely, a first group (A) which has a fixed size for each pattern, a second group (B) which has one size parameter which must be specified for each pattern, and a third group (C) which has a plurality of size parameters which must be specified for each pattern. Certain ones of the characters have elements of different patterns which overlap, such that the character may be encoded using fewer symbols, and accordingly fewer bytes of data. A given pattern may be generated at the same time another pattern is being decoded.
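
A loose, hypothetical sketch of the scheme: a character is stored as a few (symbol, position, size) entries and regenerated by stamping each symbol's dot pattern back into a full bitmap. The symbol definitions are invented stand-ins, not the patent's pattern set:

```python
import numpy as np

def hbar(size):                  # group-B style pattern: one size parameter
    return np.ones((1, size), dtype=np.uint8)

def vbar(size):
    return np.ones((size, 1), dtype=np.uint8)

# Compacted character: (pattern symbol, position, size parameter).
compacted = [(hbar, (0, 0), 8), (vbar, (0, 3), 6), (hbar, (5, 0), 8)]

bitmap = np.zeros((8, 8), dtype=np.uint8)
for pattern, (r, c), size in compacted:      # decompaction
    p = pattern(size)
    bitmap[r:r + p.shape[0], c:c + p.shape[1]] |= p
```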

01 Mar 1980
TL;DR: This paper describes GEARS, a package of Fortran subroutines designed to solve stiff systems of ordinary differential equations of the form dy/dt = f(y,t), where the Jacobian matrices J = ∂f/∂y are large and sparse.
Abstract: This paper describes GEARS, a package of Fortran subroutines designed to solve stiff systems of ordinary differential equations of the form dy/dt = f(y,t), where the Jacobian matrices J = ∂f/∂y are large and sparse. The integrator is based on the stiffly stable methods due to Gear, and this approach leads to a sparse system of nonlinear equations at each time step. These are solved with a modified Newton iteration, using one of two separate sparse matrix packages to solve the sparse linear equations that arise. This paper describes the package in some detail, discusses a number of issues that affected the design of the package, and presents a numerical example to illustrate the effectiveness of the package.
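
GEARS itself is Fortran and not reproduced here; as a loose modern analogue, SciPy's BDF integrator (also in the Gear family) accepts a sparse Jacobian, so the modified-Newton iteration at each step factors a sparse matrix, as the paper describes:

```python
import numpy as np
import scipy.sparse as sp
from scipy.integrate import solve_ivp

n = 200   # stiff linear test system dy/dt = J y with sparse constant J
J = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") * 1e4
f = lambda t, y: J @ y
sol = solve_ivp(f, (0.0, 1.0), np.ones(n), method="BDF", jac=J)
```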

Journal ArticleDOI
TL;DR: A survey of methods currently available for processing sparse matrices in a digital computer in the solution of linear algebraic equations and the eigenproblem.

Journal ArticleDOI
TL;DR: The article is devoted to the computer realization of well-known computational algorithms of linear algebra with sparse matrices, as well as sub-schemes realizing the normalized expansion of a matrix and the modified Danilewski method.
Abstract: The article is devoted to the computer realization of well-known computational algorithms of linear algebra with sparse matrices. Formal programs of operations of the second kind (subschemes) over sparse matrices are derived, as well as subschemes realizing the normalized expansion of a matrix, the modified Danilewski method, the stepwise choice of the leading element in a process of Gaussian elimination type, the search for approximations to an eigenvalue by the method of traces, etc.

Journal ArticleDOI
TL;DR: In this paper, an implicit iterative method for improving the accuracy of the inverse matrix is presented and shown to possess superior convergence properties over the well known quadratically convergent Hotelling method.
Abstract: An implicit iterative method for improving the accuracy of the inverse matrix is presented and shown to possess superior convergence properties over the well known quadratically convergent Hotelling method. Numerical examples are included to illustrate the new method. It is proposed that the new techniques may be useful in statistical analyses where matrix inverses are required.
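
The well-known quadratically convergent Hotelling (Schulz) iteration that the paper takes as its baseline, X_{k+1} = X_k (2I - A X_k), sketched in numpy; the paper's implicit variant is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)

# Classical safe starting guess: X0 = A^T / (||A||_1 ||A||_inf).
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(25):
    X = X @ (2.0 * np.eye(5) - A @ X)   # X_{k+1} = X_k (2I - A X_k)
assert np.allclose(X @ A, np.eye(5))
```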

S. Harano
01 Oct 1980
TL;DR: A "nontransmit" packing routine was added to NASTRAN to allow matrix data to be referred to directly from the input/output buffer once data addresses have been received.
Abstract: A "nontransmit" packing routine was added to NASTRAN to allow matrix data to be referred to directly from the input/output buffer. Use of the packing routine permits various matrix-handling routines to reference the input/output buffer directly once data addresses have been received. The packing routine offers a buffer-by-buffer backspace feature for efficient backspacing in sequential access. Unlike conventional backspacing, which needs two back-record operations for a single read of one record (one column), this feature omits the overlap of the READ operation and the back record. In the decomposition of a symmetric matrix, it eliminates the need to write a portion of the matrix to its upper triangular matrix from the last to the first columns, thus saving the time spent generating the upper triangular matrix. Only a lower triangular matrix must be written onto the secondary storage device, bringing a 10 to 30% reduction in use of the disk space of the storage device.

Journal ArticleDOI
TL;DR: In this paper, it is shown that in practice it is possible to work with a system matrix whose bandwidth is reduced; a simple numerical example illustrates the discussion.
Abstract: The matrix of the system of linear algebraic equations arising in the application of the finite element method to one-dimensional problems is a band matrix. In approximations of high order, the band is very wide, but the elements situated far from the diagonal of the matrix are negligibly small compared with the diagonal elements. The aim of the paper is to show on a model problem that in practice it is possible to work with a matrix of the system whose bandwidth is reduced. A simple numerical example illustrates the discussion.
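
A small numpy experiment in the paper's spirit (the decay profile and sizes are invented, not the paper's model problem): drop the negligible far-from-diagonal entries of a band matrix and compare solutions:

```python
import numpy as np

n = 40
i, j = np.indices((n, n))
A = np.where(np.abs(i - j) <= 10, 10.0 ** (-np.abs(i - j)), 0.0)
A[np.diag_indices(n)] += 2.0       # wide band, rapidly decaying off-diagonals
b = np.ones(n)

A_red = np.where(np.abs(i - j) <= 3, A, 0.0)     # reduced bandwidth
x_full = np.linalg.solve(A, b)
x_red = np.linalg.solve(A_red, b)
print(np.max(np.abs(x_full - x_red)))            # small truncation effect
```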

Proceedings ArticleDOI
01 Jan 1980
TL;DR: In this paper, a generalized facility for describing any kind of numerical database and its mapping to storage is provided via nonprocedural Stored-Data Description and Mapping Languages (SDDL and SDML).
Abstract: Numerical databases arise in many scientific applications to keep track of large sparse and dense matrices. Unlike the many matrix data storage techniques available for in-core manipulation, very large matrices are currently limited to a few compact storage schemes on secondary devices, due to the complex underlying data management facilities. This paper proposes an approach for generalized numerical database management that would promote physical data independence by relieving users of the need for knowledge of the physical data organization on the secondary devices. Our approach is to describe each of the storage techniques for dense and sparse matrices by a physical schema, which encompasses the corresponding access path, the encoding to storage structures, and the file access method. A generalized facility for describing any kind of numerical database and its mapping to storage is provided via nonprocedural Stored-Data Description and Mapping Languages (SDDL and SDML). The languages are processed by a Generalized Syntax-Directed Translation Scheme (GSDTS) to automatically generate FORTRAN conversion programs for creating or translating numerical databases from one compact storage scheme to another. The feasibility of the generalized approach with regard to our current implementation is also discussed.
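
The scheme-to-scheme conversion that the SDDL/SDML-generated FORTRAN performs has a compact SciPy analogue, shown here only to make the idea concrete:

```python
import numpy as np
import scipy.sparse as sp

row = np.array([0, 1, 2, 2])
col = np.array([1, 2, 0, 2])
val = np.array([10.0, 20.0, 30.0, 40.0])
A_coo = sp.coo_matrix((val, (row, col)), shape=(3, 3))  # coordinate scheme

A_csr = A_coo.tocsr()        # translate to the compressed-sparse-row scheme
print(A_csr.indptr, A_csr.indices, A_csr.data)
```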


Journal ArticleDOI
TL;DR: In implementing Algorithm 408 on a CDC Cyber 76-12 and a Cyber 73-16, the errors noted by Lawrence [2] were corrected, and the incomplete dimensional parameters were completed.
Abstract: When implementing Algorithm 408 on a CDC Cyber 76-12 and a Cyber 73-16, the errors noted by Lawrence [2] were corrected. In ARSPMX the dimensional parameters were incomplete and have been completed. Thus it is possible to add, for example, two sparse matrices having different numbers of nonzero elements. There is another severe error in ADSPMX, as pointed out by Sipala [3]: when adding two elements whose sum is zero, ADSPMX gives an incorrect result. For example, when
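
The truncated example aside, the failure mode described, two entries summing to zero, still has a visible modern counterpart: SciPy keeps an explicitly stored zero produced by summing cancelling entries and provides eliminate_zeros() to restore a clean structure. A small sketch:

```python
import numpy as np
import scipy.sparse as sp

# Duplicate coordinate entries are summed on conversion; the two (0, 0)
# entries cancel to zero but remain in the stored structure.
A = sp.coo_matrix((np.array([1.0, -1.0, 2.0]),
                   (np.array([0, 0, 1]), np.array([0, 0, 1]))),
                  shape=(2, 2)).tocsr()
print(A.nnz)            # 2: an explicit stored zero at (0, 0)
A.eliminate_zeros()
print(A.nnz)            # 1: the cancelled entry is gone
```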