
Showing papers on "Sparse matrix" published in 1971


Journal ArticleDOI
TL;DR: The tableau approach to automated network design optimization via implicit, variable-order, variable-time-step integration and adjoint sensitivity computation is described; the bulk of computation and program complexity is located in the sparse matrix routines.
Abstract: The tableau approach to automated network design optimization via implicit, variable-order, variable-time-step integration and adjoint sensitivity computation is described. In this approach, the only matrix operation required is that of repeatedly solving linear algebraic equations of fixed sparsity structure. Required partial derivatives and numerical integration are handled at the branch level, leading to a simple input language, complete generality, and maximum sparsity of the characteristic coefficient matrix. The bulk of computation and program complexity is thus located in the sparse matrix routines; described herein are the routines OPTORD and 1-2-3 GNSO. These routines account for the variability type of the matrix elements in producing machine code for the solution of Ax = b in nested iterations, for which a weighted sum of total operation count and round-off error incurred in the optimization is minimized.
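
The core numerical task described here, repeatedly solving Ax = b for a fixed sparsity structure, can be sketched with a modern library standing in for the paper's OPTORD and 1-2-3 GNSO code generators. The snippet below is a hedged illustration using scipy's SuperLU wrapper, not the paper's generated machine code: the matrix is factored once and the factorization is reused for successive right-hand sides.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Illustrative only: a small system whose sparsity pattern stays fixed while the
# right-hand side changes, so the factorization can be computed once and reused,
# which is the situation the tableau approach is built around.
A = csc_matrix(np.array([[4.0, 0.0, 1.0],
                         [0.0, 3.0, 0.0],
                         [1.0, 0.0, 2.0]]))
lu = splu(A)                       # ordering + numeric factorization, done once

for k in range(3):                 # e.g. successive time steps or iterations
    b = np.array([1.0, 2.0, 3.0]) * (k + 1)
    x = lu.solve(b)                # cheap repeated solves against the same LU
    print(x)
```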

305 citations


Journal ArticleDOI
TL;DR: CANCER is a reasonably general circuit analysis program especially suited to integrated-circuit simulation and derives its efficiency from the exploitation of sparse matrix, adjoint, and implicit integration techniques.
Abstract: CANCER is a reasonably general circuit analysis program especially suited to integrated-circuit simulation. The program provides for the analysis of large circuits in the following four modes of operation: nonlinear d.c., large-signal transient, small-signal a.c., and thermal and shot noise. These subanalysis capabilities are intercoupled appropriately for convenience and efficiency. Internally, CANCER is a very general nodal analysis program that derives its efficiency from the exploitation of sparse matrix, adjoint, and implicit integration techniques.
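
As a rough illustration of how implicit integration reduces each time step to a linear nodal solve, the sketch below applies backward Euler to a two-node RC circuit using the standard companion-model derivation. This is textbook material and an assumption on my part, not CANCER's internal formulation; real circuits simply produce much larger, sparser nodal matrices of the same kind.

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import splu

# Textbook companion-model sketch (assumed, not CANCER's formulation): with
# backward Euler each capacitor becomes a conductance C/h plus a history
# current, so every time step is one solve of a fixed sparse nodal matrix.
R1, R2, C1, C2, Vs, h = 1e3, 2e3, 1e-6, 1e-6, 5.0, 1e-5
G = csc_matrix(np.array([[1/R1 + 1/R2, -1/R2],
                         [-1/R2,        1/R2]]))   # conductance matrix
Cdiag = diags([C1 / h, C2 / h]).tocsc()            # capacitor companion conductances
lu = splu(G + Cdiag)                               # fixed structure: factor once

v = np.zeros(2)                                    # node voltages at t = 0
for step in range(1, 6):
    rhs = np.array([Vs / R1, 0.0]) + Cdiag @ v     # sources + capacitor history
    v = lu.solve(rhs)
    print(f"t = {step * h:.0e} s, v = {v.round(4)}")
```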

125 citations


Journal ArticleDOI
TL;DR: A node renumbering algorithm which is specifically directed at preserving the sparse structure of nodal admittance matrices during the solution by Gaussian elimination is described in detail.
Abstract: Sparse matrix storage and solution techniques are used extensively in solving the very large systems of hundreds of linear equations which arise in the analysis of multiply interconnected physical systems. These techniques have often been overlooked in the analysis of relatively small electric networks, even though their use can result in very significant improvements in computer storage requirements and execution times. The time savings are particularly noticeable when many solutions for the same circuit with different parameter values are required. A particular sparse matrix storage, reordering, and solution technique is described. A node renumbering algorithm which is specifically directed at preserving the sparse structure of nodal admittance matrices during the solution by Gaussian elimination is described in detail. Computer flow charts for the renumbering are included, along with specific circuit examples which compare the relative computational effort required for sparse solution versus full matrix solution.
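
The paper presents its renumbering algorithm in flowchart form. As a stand-in, the sketch below uses reverse Cuthill-McKee, a different, standard reordering available in scipy, only to show the kind of effect node renumbering has on the structure handed to Gaussian elimination; the matrix pattern is an arbitrary example of mine.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Not the paper's algorithm: reverse Cuthill-McKee is used here only to show
# how renumbering the nodes changes the structure handed to elimination; a
# smaller bandwidth generally means less fill-in during Gaussian elimination.
pattern = np.array([[1, 0, 0, 0, 1],
                    [0, 1, 1, 0, 0],
                    [0, 1, 1, 0, 1],
                    [0, 0, 0, 1, 0],
                    [1, 0, 1, 0, 1]])
A = csr_matrix(pattern)
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]                  # symmetrically renumbered pattern

def bandwidth(M):
    rows, cols = M.nonzero()
    return int(np.max(np.abs(rows - cols)))

print("original bandwidth: ", bandwidth(A))   # 4 for this pattern
print("reordered bandwidth:", bandwidth(B))   # smaller after renumbering
```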

83 citations


Journal ArticleDOI
TL;DR: A set of Fortran subroutines has been written for performing various operations on sparse matrices stored in compact form in core, so that the core storage requirement is reduced for any square matrix less than 66 percent dense.
Abstract: It is frequently necessary to manipulate large sparse matrices, for example in electrical network problems. In such cases much time and memory space can be saved if only the nonzero elements are stored. A set of Fortran subroutines has been written for performing various operations on sparse matrices stored in compact form in core. The core storage requirement is reduced for any square matrix less than 66 percent dense. These subroutines have been tested on an IBM 360/50 using a WATFOR compiler.
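
The subroutines' exact packing scheme is not reproduced in the abstract; the sketch below builds one common compact layout (row-pointer, column-index, and value arrays, nowadays called CSR) for a small matrix of my own, just to show the kind of storage meant. The 66 percent break-even density depends on the paper's particular format and word sizes.

```python
import numpy as np

# One compact scheme: store only nonzeros, each with its column index, plus a
# pointer array marking where each row starts. This is an illustration of the
# general idea, not the packing used by the published subroutines.
A = np.array([[5.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 3.0, 0.0, 4.0]])

values, col_index, row_ptr = [], [], [0]
for row in A:
    for j, a in enumerate(row):
        if a != 0.0:
            values.append(a)
            col_index.append(j)
    row_ptr.append(len(values))

print(values)      # [5.0, 1.0, 2.0, 3.0, 4.0]
print(col_index)   # [0, 3, 2, 1, 3]
print(row_ptr)     # [0, 2, 3, 5]: row i occupies values[row_ptr[i]:row_ptr[i+1]]
```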

31 citations


Journal ArticleDOI
TL;DR: In this article, the matrix elements of the finite SO(n,1) transformations (principal series) can be expressed as a single integral, over a compact domain, of two matrix elements of the SO(n) subgroup and a multiplier.
Abstract: We find a procedure whereby the matrix elements of the finite SO(n,1) transformations (principal series) can be expressed as a single integral, over a compact domain, of two matrix elements of the SO(n) subgroup and a multiplier. In this way we automatically obtain their classification by the canonical chain SO(n,1) ⊃ SO(n) ⊃ ⋯ ⊃ SO(2). Analytic continuation yields the SO(n+1) matrix elements in a recursive form. We obtain the asymptotic behavior of the boost matrix elements. The Inonu-Wigner contraction yields the ISO(n) representation matrix elements, classified by the chain ISO(n) ⊃ SO(n) ⊃ ⋯ ⊃ SO(2).

27 citations



Journal ArticleDOI
TL;DR: TIME's straightforward free-format input, its built-in transistor model, the general analysis method, and its very efficient sparse matrix algorithm are described.
Abstract: TIME is a Fortran computer program that can perform low-cost nonlinear d.c. and time-response simulation of bipolar transistor circuits. This paper describes TIME's straightforward free-format input, its built-in transistor model, the general analysis method, and its very efficient sparse matrix algorithm. Example analyses illustrate the computational efficiency of the program.
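
The abstract does not spell out the analysis method beyond calling it general. As a generic illustration of nonlinear d.c. solution, the sketch below runs textbook Newton-Raphson on a one-node resistor-diode circuit of my own choosing; it is not TIME's algorithm, transistor model, or code.

```python
import math

# Generic nonlinear d.c. sketch (textbook Newton-Raphson, not TIME's method):
# a source Vs feeds a diode through R, and we solve the KCL equation at the
# single node. Larger circuits give a sparse Jacobian instead of a scalar.
Vs, R, Is, Vt = 5.0, 1e3, 1e-14, 0.02585

v = 0.7                                            # guess near diode turn-on
for it in range(50):
    f = (v - Vs) / R + Is * (math.exp(v / Vt) - 1.0)   # node current residual
    df = 1.0 / R + (Is / Vt) * math.exp(v / Vt)        # derivative (1x1 Jacobian)
    step = f / df
    v -= step
    if abs(step) < 1e-12:
        break
print(f"converged after {it + 1} iterations, v = {v:.4f} V")
```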

18 citations


Journal ArticleDOI
TL;DR: An efficient ordering algorithm is presented which tends to minimize the length and execution time of this symbolic code which, when executed, solves a system of linear equations of arbitrary, but particular, sparseness structure.
Abstract: Hachtel et al. [1], [2] have recently proposed sparse matrix methods for nonlinear analysis incorporating an algorithm that generates symbolic code which, when executed, solves a system of linear equations of arbitrary, but particular, sparseness structure. They point out that the execution time and storage requirements of this code are critically dependent upon the ordering selected for processing the network equations and variables, and have themselves developed ordering methods. An efficient ordering algorithm is presented which tends to minimize the length and execution time of this symbolic code. Although the algorithm takes full advantage of the unique character of the sparse system that arises from a certain nonlinear circuit analysis representation, it is flexible enough to be used efficiently for ordering sparse matrices with different characteristics. In particular, it is especially appropriate for the repeated solution of the large sparse systems that appear in circuit analysis generally, in nonlinear differential and discrete system analysis, and in systems of linear or nonlinear algebraic equations; these problems are often part of larger problems or simulations. The algorithm contains parameters that may be easily adjusted to vary the tradeoff between ordering time and ordering efficiency. The method can (and should) be generalized to include some pivoting for numerical accuracy. Results for a typical nonlinear network indicate considerable improvement over previously published ordering schemes.
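
The paper's ordering algorithm is tuned to the tableau structure and exposes adjustable parameters. The toy sketch below implements only the generic minimum-degree idea on the matrix's elimination graph, the family of local heuristics such orderings belong to; it is a stand-in of my own, not the published method.

```python
# Toy minimum-degree-style ordering (not the paper's algorithm): repeatedly
# eliminate the vertex of smallest current degree, connecting its remaining
# neighbours pairwise, which is the graph picture of fill-in during elimination.
def min_degree_order(adj):
    adj = {v: set(n) for v, n in adj.items()}         # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # smallest degree first
        nbrs = adj.pop(v)
        for a in nbrs:
            adj[a].discard(v)
        for a in nbrs:                                # fill edges among neighbours
            adj[a].update(nbrs - {a})
        order.append(v)
    return order

# "Arrow" pattern: vertex 0 touches everything else.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(min_degree_order(adj))   # e.g. [1, 2, 0, 3]: the hub is not eliminated
                               # first, so this ordering creates no fill at all
```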

14 citations


Journal ArticleDOI
TL;DR: Generalized transforms for decomposing a signal in terms of discrete orthogonal transformations are developed, and general relationships for factoring the transform matrices into a product of sparse matrices are derived.
Abstract: Generalized transforms for decomposing a signal in terms of discrete orthogonal transformations are developed. General relationships for factoring the transform matrices into a product of sparse matrices are derived. Efficient algorithms for fast computation of these transforms are a consequence of these sparse matrices. The flow graphs, and hence the sequence of computations, are identical for all the transforms, with only the multipliers varying from one transform to another.
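
A concrete, easily checked instance of the factorization idea (my own example, not one of the paper's generalized transforms) is the 4-point Hadamard matrix, which splits exactly into two factors having only two nonzeros per row.

```python
import numpy as np

# H4 = H2 (x) H2 factors as (H2 (x) I2)(I2 (x) H2): two sparse stages of
# butterflies in place of one dense matrix-vector product.
H2 = np.array([[1, 1],
               [1, -1]])
I2 = np.eye(2, dtype=int)

F1 = np.kron(H2, I2)          # sparse factor: butterflies between halves
F2 = np.kron(I2, H2)          # sparse factor: butterflies within halves
H4 = np.kron(H2, H2)          # the full (dense) transform matrix

print(np.array_equal(F1 @ F2, H4))                   # True: factors reproduce H4
print(np.count_nonzero(F1), np.count_nonzero(H4))    # 8 vs 16 nonzeros
```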

10 citations


01 Sep 1971
TL;DR: Use of the computer for design analysis of mechanical dynamic systems can be effectively approached through the development of type-variant programs; relatively new methods for automated formulation and numerical analysis, particularly sparse matrix methods and the Gear algorithm for numerical integration, promise faster and more stable numerical evaluation.
Abstract: Use of the computer for design analysis of mechanical dynamic systems can be effectively approached through the development of type-variant programs. For background, this paper reviews the architecture of and experience with DAMN, a completed, successful program which performs the dynamic analysis of any two-dimensional mechanical network irrespective of degrees of freedom, constraints, and amount of displacement. Then some relatively new methods for automated formulation and numerical analysis are discussed, particularly sparse matrix methods and the Gear algorithm for numerical integration. These appear to promise faster and more stable numerical evaluation. (Author)
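
The Gear algorithm referred to is an early backward-differentiation (BDF) method. As a hedged illustration of why such integrators suit stiff dynamics, the sketch below hands a classic stiff linear test system, not a mechanical model from the paper, to scipy's BDF integrator.

```python
from scipy.integrate import solve_ivp

# Stiff linear test system (my example): one fast, heavily damped mode and one
# slow mode. An implicit BDF (Gear-type) integrator can take large steps once
# the fast transient has died out, where an explicit method could not.
def rhs(t, y):
    return [-1000.0 * y[0] + y[1],      # fast mode (stiff)
            y[0] - y[1]]                # slow mode

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], method="BDF", rtol=1e-6, atol=1e-9)
print(sol.success, sol.t.size, "accepted steps")
```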

5 citations



11 Oct 1971
TL;DR: A number of computer programs that generate code for the manipulation of sparse matrices for the efficient repetitive solution of simultaneous linear equations which have a common sparsity structure are described.
Abstract: The report describes the operation and use of a number of computer programs that generate code for the manipulation of sparse matrices. In particular, code generators are described for the efficient repetitive solution of simultaneous linear equations which have a common sparsity structure. Applications to a variety of engineering problems are given. (Author)
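
In the same spirit, though not the report's generators, the sketch below emits straight-line Python for forward substitution on one fixed sparsity pattern; the generated routine contains no loops or index tests, which is the payoff of generating code for a known structure. All names here are hypothetical.

```python
# Hypothetical sketch of sparsity-directed code generation: given the pattern of
# a unit lower-triangular factor, emit an unrolled forward-substitution routine.
def generate_forward_substitution(pattern, n):
    """pattern[i] lists the columns j < i where L[i][j] is nonzero."""
    lines = ["def forward_subst(L, b):", f"    x = [0.0] * {n}"]
    for i in range(n):
        expr = f"b[{i}]"
        for j in pattern.get(i, []):
            expr += f" - L[{i}][{j}] * x[{j}]"
        lines.append(f"    x[{i}] = {expr}")
    lines.append("    return x")
    return "\n".join(lines)

# A 3x3 pattern with one off-diagonal entry in rows 1 and 2.
src = generate_forward_substitution({1: [0], 2: [1]}, 3)
print(src)                      # show the generated straight-line code
exec(src)                       # defines forward_subst in this namespace
L = [[1, 0, 0], [2, 1, 0], [0, 3, 1]]
print(forward_subst(L, [1.0, 4.0, 8.0]))   # [1.0, 2.0, 2.0]
```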

Journal ArticleDOI
TL;DR: If A is real, symmetric and positive definite, for what values of λ does the matrix A - λI have no LDL^T decomposition (L unit lower triangular) without pivoting?
Abstract: If A is real, symmetric and positive definite, for what values of λ does the matrix A - λI have no LDL^T decomposition (L unit lower triangular) without pivoting? The answer is apparently not closely related to singularity, as illustrated in the following examples.
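
A small numerical illustration of the point (my own example, not one from the note): LDL^T without pivoting breaks down when a leading principal minor of order less than n vanishes, and that can happen while A - λI itself remains nonsingular.

```python
import numpy as np

# The pivots of an LDL^T factorization without pivoting are ratios of leading
# principal minors, so the factorization fails exactly when some leading minor
# of order < n is zero, even if the shifted matrix is far from singular.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric positive definite (eigenvalues 1, 3)
lam = 2.0
M = A - lam * np.eye(2)         # [[0, 1], [1, 0]]
print(np.linalg.det(M))         # about -1: M is not singular
print(M[0, 0])                  # 0.0: the first pivot, so elimination without
                                # pivoting fails at the very first step
```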

Journal ArticleDOI
Allan Douglas1
TL;DR: Two proposed strategies for performing Gaussian elimination efficiently on a sparse matrix are particular cases of examining the local consequences of vertex elimination in the graph associated with the matrix, and examples show that neither strategy is infallibly optimal, that neither is consistently better than the other, and that any similar local strategy cannot be infallibly optimal.
Abstract: Two proposed strategies for performing Gaussian elimination efficiently on a sparse matrix are particular cases of examining the local consequences of vertex elimination in the graph associated with the matrix. Examples show that neither strategy is infallibly optimal, that neither is consistently better than the other, and that any similar local strategy cannot be infallibly optimal. The two strategies do not often differ in low-order cases. A considerably more efficient implementation, described in an appendix, makes one of them generally preferable.
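
The local quantity such strategies inspect can be made concrete: eliminating a vertex joins its remaining neighbours pairwise, so a candidate can be scored by its degree or by the fill edges its elimination would create. The toy sketch below computes that fill count on a star graph of my own; it is not either published strategy as such.

```python
from itertools import combinations

def fill_edges(adj, v):
    """Edges that eliminating v would add among its neighbours (graph as dict of sets)."""
    nbrs = adj[v]
    return {(a, b) for a, b in combinations(sorted(nbrs), 2) if b not in adj[a]}

# Star graph: eliminating the hub 0 fills in all pairs of leaves,
# while eliminating any leaf adds no fill at all.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(len(fill_edges(adj, 0)))   # 3
print(len(fill_edges(adj, 1)))   # 0
```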

Journal ArticleDOI
TL;DR: A general method is presented for the computation of the fast Fourier transform from data stored in external auxiliary memory, for any general radix r = 2^n, n ≥ 1; external data storage is necessitated whenever the internal computer memory is limited.
Abstract: A general method is presented for the computation of the fast Fourier transform from data stored in external auxiliary memory, for any general radix r = 2^n, n ≥ 1. External data storage is necessitated whenever the internal computer memory is limited. The general radix requirement arises from the tradeoff, in serial FFT processor machines, between the number of passes required to address storage and the number of equivalent sparse matrix multiplicative operations required to compute the fast Fourier transform.
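
The pass-count side of that tradeoff is easy to tabulate: for N points and radix r = 2^n, a radix-r FFT needs about log_r N passes over the externally stored data, at the price of denser per-pass operations. A back-of-the-envelope sketch with numbers of my own, not the paper's:

```python
import math

# Raising the radix cuts the number of passes over external storage but makes
# each pass's butterfly (sparse matrix) operations denser.
N = 2 ** 20
for n in (1, 2, 4):                       # radix 2, 4, and 16
    r = 2 ** n
    passes = round(math.log(N, r))
    print(f"radix {r:3d}: {passes:2d} passes over external storage")
```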

Journal ArticleDOI
01 Jan 1971
TL;DR: An implementation of Gaussian reduction of complex matrices by sparse matrix techniques is presented and has been specifically tailored to perform frequency analysis of linear circuits within an on-line circuit-design programming system.
Abstract: An implementation of Gaussian reduction of complex matrices by sparse matrix techniques is presented. The computer program described has been specifically tailored to perform frequency analysis of linear circuits within an on-line circuit-design programming system.
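
A hedged sketch of the surrounding idea rather than of the described program: at each analysis frequency the nodal matrix of a linear circuit is complex, Y = G + jωC, and one sparse complex solve per frequency point gives the a.c. response. The element values below are arbitrary.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Illustrative frequency sweep on a tiny two-node network (not the paper's
# program): the sparse complex nodal matrix is rebuilt and solved per frequency.
G = csc_matrix(np.array([[2e-3, -1e-3],
                         [-1e-3, 1e-3]]))       # conductances (siemens)
C = csc_matrix(np.array([[1e-9, 0.0],
                         [0.0, 2e-9]]))         # capacitances (farads)
i_src = np.array([1e-3, 0.0])                   # 1 mA drive into node 1

for f in (1e3, 1e6):                            # two frequency points
    w = 2 * np.pi * f
    Y = (G + 1j * w * C).tocsc()                # complex sparse nodal matrix
    v = spsolve(Y, i_src)
    print(f"f = {f:.0e} Hz, |v1| = {abs(v[0]):.3f} V")
```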