
Showing papers on "Sparse matrix published in 1969"


Proceedings ArticleDOI
26 Aug 1969
TL;DR: A direct method of obtaining an automatic nodal numbering scheme to ensure that the corresponding coefficient matrix will have a narrow bandwidth is presented.
Abstract: The finite element displacement method of analyzing structures involves the solution of large systems of linear algebraic equations with sparse, structured, symmetric coefficient matrices. There is a direct correspondence between the structure of the coefficient matrix, called the stiffness matrix in this case, and the structure of the spatial network delineating the element layout. For the efficient solution of these systems of equations, it is desirable to have an automatic nodal numbering (or renumbering) scheme to ensure that the corresponding coefficient matrix will have a narrow bandwidth. This is the problem considered by R. Rosen [1]. A direct method of obtaining such a numbering scheme is presented. In addition several methods are reviewed and compared.

1,518 citations
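To make the bandwidth idea concrete, here is a small sketch. This is not the method presented in the paper, just a plain breadth-first renumbering in the same spirit; the adjacency-dict representation and the names `bandwidth` and `bfs_renumber` are illustrative.

```python
from collections import deque

def bandwidth(adj, order):
    """Bandwidth of the matrix induced by numbering `order` on graph `adj`."""
    pos = {node: i for i, node in enumerate(order)}
    return max((abs(pos[u] - pos[v]) for u in adj for v in adj[u]), default=0)

def bfs_renumber(adj, start):
    """Number nodes level by level from `start`, visiting low-degree
    neighbours first -- the core idea behind bandwidth-reducing schemes."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u], key=lambda n: len(adj[n])):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# A small mesh numbered badly: the nodes form the chain 0-2-4-1-3
adj = {0: [2], 2: [0, 4], 4: [2, 1], 1: [4, 3], 3: [1]}
print(bandwidth(adj, [0, 1, 2, 3, 4]))       # original numbering -> 3
print(bandwidth(adj, bfs_renumber(adj, 0)))  # BFS renumbering -> 1
```

Numbering the chain consecutively brings every nonzero next to the diagonal, which is exactly the effect a stiffness-matrix renumbering scheme is after.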


Journal ArticleDOI
TL;DR: Three orthogonalization techniques to correct errors in the computed direction cosine matrix are introduced. These techniques were tested experimentally and compared with a method used by the Honeywell Corporation.
Abstract: Three orthogonalization techniques to correct errors in the computed direction cosine matrix are introduced. One of these techniques is a vectorial technique based on the fact that the three rows of a direction cosine matrix constitute an orthonormal set of vectors in three-dimensional space. The other two iterative techniques are based on the fact that the inverse and transpose of an orthogonal matrix are equal. In computing a time-varying direction cosine matrix, computational errors are accompanied by the loss of the orthogonality property of the matrix. When one of these three techniques is used to restore the orthogonality of the matrix, the computational errors are also corrected. These techniques were tested experimentally and the results, given in this paper, were compared with a method used by the Honeywell Corporation.

49 citations
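The iterative techniques rest on the identity C⁻¹ = Cᵀ for an orthogonal matrix. As a hedged illustration, here is a standard Newton–Schulz correction step, not necessarily one of the paper's three techniques, applied to a perturbed direction cosine matrix:

```python
import numpy as np

def orthogonality_error(C):
    """Distance of C's rows from an orthonormal set: ||C C^T - I||."""
    return np.linalg.norm(C @ C.T - np.eye(3))

def reorthogonalize(C, iters=5):
    """Iterative correction exploiting C^-1 = C^T for orthogonal matrices:
    the Newton-Schulz step C <- C (3I - C^T C) / 2 converges to the nearest
    orthogonal matrix when the error is small."""
    for _ in range(iters):
        C = C @ (1.5 * np.eye(3) - 0.5 * (C.T @ C))
    return C

# A rotation about the z-axis, perturbed as if by accumulated integration error
a = np.deg2rad(30.0)
C = np.array([[ np.cos(a), np.sin(a), 0.0],
              [-np.sin(a), np.cos(a), 0.0],
              [ 0.0,       0.0,       1.0]])
C_err = C + 1e-3 * np.random.default_rng(0).standard_normal((3, 3))

print(orthogonality_error(C_err))                  # roughly the perturbation size
print(orthogonality_error(reorthogonalize(C_err))) # near machine precision
```

Because the correction pulls the matrix back to the nearest orthogonal one, small accumulated integration errors are largely removed along with the loss of orthogonality, which is the effect the paper describes.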



Journal ArticleDOI
TL;DR: In this paper, the authors use matrix norms that satisfy the Schwarz inequality to determine upper bounds for the error in some common computations involving the matrix exponential function.
Abstract: Matrix norms that satisfy the Schwarz inequality are used to determine upper bounds for the error in some common computations involving the matrix exponential function.

19 citations
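As a hedged illustration of the kind of bound such norms give (not the paper's derivation): for a submultiplicative norm with ||I|| = 1, such as the spectral norm, ||e^A|| <= e^||A||. The sketch below checks this numerically against a truncated Taylor series:

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small ||A||)."""
    E = term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
spec = lambda M: np.linalg.norm(M, 2)  # spectral norm: submultiplicative, ||I|| = 1
print(spec(expm_series(A)) <= np.exp(spec(A)))  # the bound ||e^A|| <= e^||A|| -> True
```

The bound follows term by term: ||A^k|| <= ||A||^k, so the series for e^A is dominated by the scalar series for e^||A||.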


01 Jan 1969
TL;DR: Ten pivot selection rules for representing the inverse of a sparse basis in triangularized product form are compared; one of the rules yields inverses that were only slightly less sparse than the original basis.
Abstract: The authors empirically compared ten pivot selection rules for representing the inverse of a sparse basis in triangularized product form. On examples drawn from actual applications, one of the rules yielded inverses that were only slightly less sparse than the original basis. The rule was used in the M5 mathematical programming system and resulted in a substantial reduction in running time.

16 citations
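The listing above does not name the ten rules, but a classic fill-in-limiting heuristic in the same spirit is the Markowitz rule. A minimal sketch on a sparsity pattern follows; the dict-of-sets representation and the function name are illustrative, and this is not necessarily one of the rules the paper compared.

```python
def markowitz_pivot(rows):
    """Pick the nonzero (i, j) minimizing (r_i - 1)(c_j - 1), where r_i and
    c_j count nonzeros in row i and column j -- an upper bound on the
    fill-in that eliminating on (i, j) can create."""
    r = {i: len(cols) for i, cols in rows.items()}
    c = {}
    for cols in rows.values():
        for j in cols:
            c[j] = c.get(j, 0) + 1
    return min(((i, j) for i, cols in rows.items() for j in cols),
               key=lambda ij: (r[ij[0]] - 1) * (c[ij[1]] - 1))

# Sparsity pattern of a 3x3 basis: row -> set of columns holding nonzeros
rows = {0: {0, 1, 2}, 1: {1}, 2: {0, 2}}
print(markowitz_pivot(rows))  # -> (1, 1): a singleton row costs no fill-in
```

Rules of this kind are what keep the triangularized product-form inverse nearly as sparse as the basis itself.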


Journal ArticleDOI
TL;DR: Compared with the other areas discussed here tonight, the "state of the art" in matrix computations is fairly advanced; there are reasonably well tested and well documented algorithms available in the literature, or soon to be available, for solving several standard matrix problems.
Abstract: Compared with the other areas discussed here tonight, the "state of the art" in matrix computations is fairly advanced. That is, there are reasonably well tested and well documented algorithms available in the literature, or soon to be available, for solving several standard matrix problems. Research on more esoteric problems is well underway and reliable algorithms have been developed, although they are not widely available. Many of the algorithms I will mention are in Algol. But for the most part they are so straightforward and use so few special Algol features that their translation (by hand) to any other language is very easy. In many areas this is not always true, but I feel that it is true with well-written routines for matrix problems. As I see it, the basic problems and the recommended algorithms are the following: Elimination with partial pivoting is almost universally used. It should be organized to save the multipliers so that additional right-hand sides can be processed later. This is also called LU decomposition. The choice between the Gaussian and Crout variants, and between actual row interchanges and an indexing vector, is system dependent. Some kind of scaling may be included, although nobody knows exactly what kind is best. In floating point, the scaling should be implicit: the matrix is left alone and the scale factors are merely used in the pivot selection. This eliminates the need for using exact powers of the machine number base.

3 citations
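The organization described, elimination with partial pivoting that saves the multipliers so additional right-hand sides can be processed later, can be sketched for a small dense matrix as follows (a minimal illustration, not a production routine; scaling is omitted):

```python
def lu_partial_pivoting(A):
    """LU decomposition with partial pivoting, storing the multipliers in
    the lower triangle in place of the eliminated entries. Returns the
    packed LU matrix and the row-interchange record."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    piv = list(range(n))
    for k in range(n):
        # partial pivoting: bring the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]   # multiplier
            A[i][k] = m             # saved for later right-hand sides
            for j in range(k + 1, n):
                A[i][j] -= m * A[k][j]
    return A, piv

def solve(LU, piv, b):
    """Process one right-hand side using the saved multipliers."""
    n = len(LU)
    y = [b[p] for p in piv]
    for i in range(n):              # forward substitution (unit lower L)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in range(n - 1, -1, -1):  # back substitution (upper U)
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

LU, piv = lu_partial_pivoting([[2.0, 1.0], [4.0, 3.0]])
print(solve(LU, piv, [3.0, 7.0]))  # -> [1.0, 1.0]
```

Because the factorization is kept, each further right-hand side costs only the two triangular solves, which is the payoff the abstract points to.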


Journal ArticleDOI
TL;DR: A unified approach to linear circuit analysis is described which, apart from its simplicity, reduces the time required for circuit analysis on an undergraduate electronics course.
Abstract: A unified approach to linear circuit analysis is described which, apart from its simplicity, reduces the time required for circuit analysis on an undergraduate electronics course. Representation of a network by its indefinite admittance matrix is well established, and for an n-node network results in an n × n matrix. Reduction of this matrix to a 2 × 2 matrix is described, and an easy-to-memorize procedure in terms of the input and output nodes is given.
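One standard way to carry out such a reduction, not necessarily the exact procedure given in the paper, is node elimination (Kron reduction): partition Y by kept and internal nodes and form the Schur complement. A minimal sketch:

```python
import numpy as np

def reduce_admittance(Y, keep):
    """Eliminate all nodes not in `keep` from the admittance matrix Y,
    giving the matrix seen at the kept (input/output) terminals:
    Y_red = Y_kk - Y_ki @ inv(Y_ii) @ Y_ik  (Kron reduction)."""
    keep = list(keep)
    drop = [i for i in range(len(Y)) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yki = Y[np.ix_(keep, drop)]
    Yik = Y[np.ix_(drop, keep)]
    Yii = Y[np.ix_(drop, drop)]
    return Ykk - Yki @ np.linalg.solve(Yii, Yik)

# Two 1-siemens conductances in series through internal node 1:
# terminals 0 and 2 should see a single 0.5-siemens branch.
Y = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
print(reduce_admittance(Y, [0, 2]))  # -> [[0.5, -0.5], [-0.5, 0.5]]
```

Applying this with `keep` set to the input and output nodes collapses any n × n indefinite admittance matrix to the 2 × 2 form the abstract describes.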