
Showing papers on "Sparse matrix published in 1982"


Journal ArticleDOI
TL;DR: The basic approach to modifications of geometric models, a procedure for significant reduction of the number of constraint equations to be solved, and the effect of sparse matrix methods in reducing the time required to solve the equations are presented.
Abstract: Systems for computer-aided mechanical design use geometric models for drafting, analysis and programming of NC machines. Because design is iterative in nature, the topology, geometry or dimensioning of a geometric model must be modified many times during the design cycle. The effectiveness of future CAD systems will depend in large part upon the ease with which geometric models can be created and modified. This paper presents the results of a research effort to develop flexible procedures for the definition and modification of geometric models. A central idea of this effort is that dimensions, such as appear on a mechanical drawing, are a natural descriptor of geometry and provide the most appropriate means for altering a geometric model. A procedure is described by which geometry is determined from a set of dimensions. The geometry corresponding to an altered dimension is found through the simultaneous solution of the set of constraint equations. Presented in this paper are the basic approach to modifications of geometric models, a procedure for significant reduction of the number of constraint equations to be solved, and the effect of sparse matrix methods in reducing the time required to solve the equations.

330 citations


Journal ArticleDOI
TL;DR: An implementation of sparse ${LDL}^T$ and LU factorization and back-substitution, based on a new scheme for storing sparse matrices, is presented and appears to be as efficient in terms of work and storage as existing schemes.
Abstract: An implementation of sparse ${LDL}^T$ and LU factorization and back-substitution, based on a new scheme for storing sparse matrices, is presented. The new method appears to be as efficient in terms of work and storage as existing schemes. It is more amenable to efficient implementation on fast pipelined scientific computers.
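The paper's new storage scheme is not reproduced in this abstract, but the general idea behind such schemes, storing only the nonzeros together with index arrays, can be illustrated with the classic compressed sparse row (CSR) layout. This is a hypothetical Python sketch for illustration, not the authors' scheme:

```python
def to_csr(dense):
    """Convert a dense matrix (list of lists) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # end of this row's nonzeros
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x using only the stored nonzeros."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

A = [[4, 0, 0],
     [0, 2, 1],
     [0, 0, 3]]
vals, cols, ptr = to_csr(A)
print(csr_matvec(vals, cols, ptr, [1, 1, 1]))  # [4, 3, 3]
```

The inner loop of `csr_matvec` is a contiguous scan over `values`, which is the kind of access pattern that pipelined machines of the era favored.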

202 citations


Journal ArticleDOI
TL;DR: A comprehensive set of test problems will lead to a better understanding of the range of structures in sparse matrix problems and thence to better classification and development of algorithms.
Abstract: The development, analysis and production of algorithms in sparse linear algebra often requires the use of test problems to demonstrate the effectiveness and applicability of the algorithms. Many algorithms have been developed in the context of specific application areas and have been tested on sets of test problems collected by their developers. Comparisons of algorithms across application areas, and comparisons between algorithms, have often been incomplete due to the lack of a comprehensive set of test problems. Additionally, we believe that a comprehensive set of test problems will lead to a better understanding of the range of structures in sparse matrix problems and thence to better classification and development of algorithms. We have agreed to sponsor and maintain a general library of sparse matrix test problems, available on request to anyone for a nominal fee to cover postal charges. Contributors to the library will, of course, receive a free copy.

192 citations


Journal ArticleDOI
TL;DR: This work approaches the problem of estimating Hessian matrices by differences from a graph theoretic point of view and shows that both direct and indirect approaches have a natural graph coloring interpretation.
Abstract: Large scale optimization problems often require an approximation to the Hessian matrix. If the Hessian matrix is sparse then estimation by differences of gradients is attractive because the number of required differences is usually small compared to the dimension of the problem. The problem of estimating Hessian matrices by differences can be phrased as follows: Given the sparsity structure of a symmetric matrix $A$, obtain vectors $d_{1},d_{2},\ldots,d_{p}$ such that $Ad_{1},Ad_{2},\ldots,Ad_{p}$ determine $A$ uniquely with $p$ as small as possible. We approach this problem from a graph theoretic point of view and show that both direct and indirect approaches to this problem have a natural graph coloring interpretation. The complexity of the problem is analyzed and efficient practical heuristic procedures are developed. Numerical results illustrate the differences between the various approaches.
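The simplest direct grouping (in the style of Curtis-Powell-Reid for Jacobians; the paper's symmetric variants refine this) can be sketched as greedy coloring of the column intersection graph: two columns that both have a nonzero in some row cannot share a difference vector. The function name and greedy ordering below are illustrative assumptions, not the paper's heuristics:

```python
def color_columns(sparsity):
    """Greedy coloring of the column intersection graph.
    sparsity: list of sets; sparsity[i] = columns with a nonzero in row i.
    Columns that share a row get different colors (difference groups)."""
    n = max((max(r) for r in sparsity if r), default=-1) + 1
    # Adjacency: columns appearing together in some row conflict.
    adj = {j: set() for j in range(n)}
    for row in sparsity:
        for a in row:
            for b in row:
                if a != b:
                    adj[a].add(b)
    color = {}
    for j in range(n):
        used = {color[k] for k in adj[j] if k in color}
        c = 0
        while c in used:
            c += 1
        color[j] = c
    return color

rows = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3}]  # tridiagonal sparsity
groups = color_columns(rows)
print(groups)                     # {0: 0, 1: 1, 2: 2, 3: 0}
print(max(groups.values()) + 1)   # 3 gradient differences instead of 4
```

Each color class yields one difference vector $d$ (the sum of unit vectors of its columns), so the number of colors is the number $p$ of gradient evaluations needed.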

154 citations


Journal ArticleDOI
TL;DR: It is verified (by many numerical experiments) that the use of sparse matrix techniques with IR may also result in a reduction of both the computing time and the storage requirements.
Abstract: It is well known that if Gaussian elimination with iterative refinement (IR) is used in the solution of systems of linear algebraic equations $Ax = b$ whose matrices are dense, then the accuracy of the results will usually be greater than the accuracy obtained by the use of Gaussian elimination without iterative refinement (DS). However, both more storage (about $100\% $, because a copy of matrix A is needed) and more computing time (some extra time is needed to perform the iterative process) must be used with IR. Normally, when the matrix is sparse the accuracy of the solution computed by some sparse matrix technique and IR will still be greater. In this paper it is verified (by many numerical experiments) that the use of sparse matrix techniques with IR may also result in a reduction of both the computing time and the storage requirements (this will never happen when IR is applied for dense matrices). Two parameters, a drop-tolerance $T \geqq 0$ and a stability factor $u > 1$, are introduced in the effo...
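The refinement loop itself is simple; the dense Python sketch below shows only that loop (the paper's contribution, reusing sparse factors computed with a drop tolerance $T$, is not shown, and in practice the second solve reuses the factors rather than refactoring):

```python
def lu_solve(A, b):
    """Gaussian elimination with partial pivoting (dense, illustration only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented working copy
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

def refine(A, b, iters=3):
    """Iterative refinement: solve, then repeatedly correct with the residual."""
    x = lu_solve(A, b)
    for _ in range(iters):
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        d = lu_solve(A, r)   # in practice: reuse the (possibly dropped) factors
        x = [xi + di for xi, di in zip(x, d)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = refine(A, b)
# exact solution: x = [1/11, 7/11]
```

The paper's point is that with a drop tolerance the factors are cheaper and sparser than exact ones, and refinement recovers the lost accuracy, so IR can save both time and storage in the sparse case.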

63 citations


Journal ArticleDOI
TL;DR: Effective implementation of the matrix multiplication requires efficient performance of data gather and scatter sequences; a performance of 10 Mflops is observed on the CRAY-1, and its cost-effectiveness is deduced from data obtained on CDC 7600 equipment.

56 citations


Journal ArticleDOI
TL;DR: The algorithm realized by GPSKCA provides the same mathematical capabilities as provided by REDUCE, and removes some implicit restrictions on the matrices that can be reordered.
Abstract: Given the structure of a symmetric or structurally symmetric sparse matrix, GPSKCA attempts to find a symmetric reordering of the matrix that produces a smaller bandwidth or profile. References [1], [4], [5], and [6] explain in detail the algorithms realized by GPSKCA. This algorithm provides the same mathematical capabilities as provided by REDUCE, Algorithms 508 and 509, but requires less memory and time and removes some implicit restrictions on the matrices that can be reordered. A description of the differences in the implementation and their effects is given in [7]; GPSKCA and REDUCE produce the same bandwidth and profile on all problems for which REDUCE executes successfully. The package of subroutines is invoked by the FORTRAN statement

41 citations


01 Sep 1982
TL;DR: This note concerns the computation of the Cholesky factorization of a symmetric and positive definite matrix on a systolic array using the special properties of the matrix to simplify the algorithm and the corresponding architecture given by Kung and Leiserson.
Abstract: This note concerns the computation of the Cholesky factorization of a symmetric and positive definite matrix on a systolic array. We use the special properties of the matrix to simplify the algorithm and the corresponding architecture given by Kung and Leiserson.
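Hardware details aside, each cell of such a systolic array evaluates terms of the standard Cholesky recurrence; a scalar Python sketch of that recurrence (not the Kung-Leiserson array layout):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T.
    A must be symmetric positive definite (list of lists)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))  # partial inner product
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # off-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(A)  # L = [[2, 0], [1, 2]]
```

Symmetry is what the systolic simplification exploits: only the lower triangle is computed and stored, roughly halving the work relative to a general LU array.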

27 citations


Journal ArticleDOI
TL;DR: In this article, a self-consistent optimization procedure, free from reliance on comparison with a trial source function, for optimizing the configuration of these chords is described and used to demonstrate that an asymmetric arrangement usually leads to greater reconstruction accuracy than a regular array.
Abstract: Matrix inversion and least squares fitting have been used to recover two dimensional distribution functions from a small number of line integrals taken along chords across them. A self-consistent optimization procedure, free from reliance on comparison with a trial source function, for optimizing the configuration of these chords is described and used to demonstrate that an asymmetric arrangement usually leads to greater reconstruction accuracy than a regular array. Smoothing is incorporated by imposing auxiliary conditions relating to the second derivative ∇²f of the source function f, and its effect on reconstruction accuracy and resolution is investigated. These methods are applied to the ten-channel far-infrared interferometer being prepared for use on JET. Electron density contour shapes can be identified rather sensitively if the source function contours belong to a predetermined family.

26 citations


Journal ArticleDOI
TL;DR: In this paper, a delay operational matrix is constructed from the Walsh matrix, which is used to solve multi-delay linear dynamic systems, and a simple example is given to compare the actual solution and the solution obtained by the techniques of this paper.
Abstract: A matrix, called the “delay operational matrix”, is constructed from the Walsh matrix. This matrix, together with some matrices obtained from the delay operational matrix after performing right-shift operations, is used to solve multi-delay linear dynamic systems. A simple example is given to compare the actual solution and the solution obtained by the techniques of this paper.

26 citations


Journal ArticleDOI
TL;DR: The two-scalar-potentials method for anisotropic materials is formulated and a computer program and the solution of an example problem are presented and the use of infinite multipolar elements is discussed.
Abstract: The two-scalar-potentials idea has been used with success for the computation of static magnetic fields in the presence of nonlinear isotropic magnetic materials by the finite element method. In this communication we formulate the two-scalar-potentials method for anisotropic materials and present a computer program and the solution of an example problem. The use of infinite multipolar elements is also discussed. Several advanced methods and ideas are employed by the program: scalar potentials, rather than vector potentials, giving only one unknown quantity; the finite element method, in which the solution is approximated by a continuous function; the Galerkin method to solve the differential equations; accurate infinite elements, which avoid the introduction of an artificial boundary for unbounded problems; automatic mesh generation, which means that the user can construct a large mesh and represent a complicated geometry with little effort; automatic elimination of nodes outside the iron, which restricts the iterations to the nonlinear anisotropic region with economy of computer time; and use of sparse matrix technology, which represents a further economy in computer time when assembling the linear equations and solving them by either Gauss elimination or iterative techniques such as the conjugate gradient method. The combination of these techniques is very convenient.

Journal ArticleDOI
TL;DR: The sparse form of one of the most successful Variable Metric Methods (BFGS) is used to solve power system optimization problems using the sparse factors of the Hessian matrix as opposed to a full inverse Hessian.
Abstract: The sparse form of one of the most successful Variable Metric Methods (BFGS [1, 2]) is used to solve power system optimization problems. The main characteristic of the method is that the sparse factors of the Hessian matrix are used as opposed to a full inverse Hessian. In addition, these factors are updated at every BFGS iteration using a fast and robust sparsity oriented updating algorithm.
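The rank-two BFGS update that the sparse factor-updating scheme maintains is, in dense form, B⁺ = B + yyᵀ/(yᵀs) − (Bs)(Bs)ᵀ/(sᵀBs), where s is the step and y the gradient change. A dense Python sketch of just this update (the paper instead updates sparse factors of B, which is the part not shown here):

```python
def bfgs_update(B, s, y):
    """Dense BFGS update of a Hessian approximation B:
    B+ = B + y y^T / (y^T s) - (B s)(B s)^T / (s^T B s)."""
    n = len(B)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    ys = sum(yi * si for yi, si in zip(y, s))     # curvature y^T s (must be > 0)
    sBs = sum(si * bi for si, bi in zip(s, Bs))   # s^T B s
    return [[B[i][j] + y[i] * y[j] / ys - Bs[i] * Bs[j] / sBs
             for j in range(n)] for i in range(n)]

B1 = bfgs_update([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0], [2.0, 0.0])
# B1 satisfies the secant condition B1 @ s == y, giving [[2, 0], [0, 1]]
```

Keeping factors of B rather than the inverse Hessian is what preserves sparsity: the update touches only a structured part of the factorization.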

Journal ArticleDOI
TL;DR: This paper describes the implementation of Stott's Fast Decoupled Power Flow algorithm on the Floating Point Systems AP-120B array processor, which will solve a 1000-bus problem in less than 0.5 second from a "flat start".
Abstract: The array processor is a comparatively recent innovation in computer architecture which promises large amounts of inexpensive computing power on fairly large problems. In particular, it is able to handle problems involving large, sparse matrix manipulations without serious degradation in performance. One such problem is the AC Power Flow simulation. This paper describes the implementation of Stott's Fast Decoupled Power Flow algorithm on the Floating Point Systems AP-120B array processor. The goal is a power flow which will solve a 1000-bus problem in less than 0.5 second from a "flat start". The parallelism afforded by the functional units of the AP-120B has a pronounced effect on how the algorithm is implemented. The sparse linear equation solver dictates the hardware options with which the AP-120B should be equipped.

Book ChapterDOI
01 Jan 1982
TL;DR: The first part of this paper deals with the description of large sparse matrices through correspondence analysis, and the second with the processing of textual data, which frequently leads to such sparse arrays.
Abstract: The first part of this paper deals with the description of large sparse matrices through correspondence analysis. Whereas the usual algorithms may destroy the sparseness and require in-core diagonalization, the procedure presented here works only on a reduced coding of the basic array (put on an external file). The second part is devoted to the processing of textual data, which frequently leads to such sparse arrays.

Journal ArticleDOI
TL;DR: This work considers direct methods based on Gaussian elimination for solving sparse sets of linear equations using a “multi-frontal” technique that moves the reals within storage in such a way that all operations are performed on full matrices although the pivotal strategy is minimum degree.


Journal ArticleDOI
TL;DR: An approach for solving large-scale engineering design problems is described, using sparse matrix methods to simultaneously solve the system describing equations and optimize the design variables.

Journal ArticleDOI
TL;DR: This paper presents an efficient technique for reducing the matrix that can be applied to any analysis table; a very compact form can be obtained and access from the resulting data structure is very fast.
Abstract: Analysis tables based on automata, such as parsing tables and lexical analysis tables, can be represented by sparse matrices with invariant entries. This paper presents an efficient technique for reducing the matrix that can be applied to any analysis table. The advantages are that a very compact form can be obtained and that access from the resulting data structure is very fast. These desirable features are confirmed by experimental results for the parsing table and the lexical analysis table of a typical programming language, PASCAL.
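The abstract does not spell out the reduction technique; a common approach to the same problem is row-displacement compression, where all rows of the table are overlaid into one packed array at offsets chosen so their non-default entries never collide, with an owner array to disambiguate lookups. A hypothetical Python sketch (not necessarily the paper's method):

```python
def compress(table, default=0):
    """Row-displacement compression of a sparse 2-D table."""
    packed, owner, offsets = [], [], []
    for r, row in enumerate(table):
        cols = [c for c, v in enumerate(row) if v != default]
        off = 0
        # Slide the row right until its entries fall into free slots.
        while any(off + c < len(packed) and owner[off + c] is not None
                  for c in cols):
            off += 1
        need = off + (max(cols) + 1 if cols else 0)
        while len(packed) < need:
            packed.append(default)
            owner.append(None)
        for c in cols:
            packed[off + c] = table[r][c]
            owner[off + c] = r          # remember which row owns this slot
        offsets.append(off)
    return packed, owner, offsets

def lookup(packed, owner, offsets, r, c, default=0):
    i = offsets[r] + c
    if i < len(packed) and owner[i] == r:
        return packed[i]
    return default

table = [[1, 2, 0],
         [3, 0, 0]]
packed, owner, offsets = compress(table)
# packed == [1, 2, 3], offsets == [0, 2]: 3 slots instead of 6
```

Lookup is a single indexed access plus an owner check, which matches the "very fast access" property the abstract claims for such compact forms.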

Journal ArticleDOI
TL;DR: A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
Abstract: A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.

Journal ArticleDOI
01 Feb 1982
TL;DR: An algorithm has been designed to allow implementation in hardware for 625-line pictures and employs factorisation of the transform matrix into sparse matrices, some of which are sparse factors of a Hadamard matrix.
Abstract: An algorithm is described for realisation of the discrete cosine transform. The algorithm has been designed to allow implementation in hardware for 625-line pictures. It employs factorisation of the transform matrix into sparse matrices, some of which are sparse factors of a Hadamard matrix. Further attention is given to the sharing of hardware components so as to reduce circuit complexity.

Journal ArticleDOI
TL;DR: A generalized methodology for modeling various system components in power system dynamics simulation studies is presented and one of the salient features of the method is its applicability to transient stability, mid-term dynamics simulation and long-term dynamics simulation.
Abstract: A generalized methodology for modeling various system components in power system dynamics simulation studies is presented in this paper. One of the salient features of the method is its applicability to transient stability, mid-term dynamics simulation and long-term dynamics simulation. Also, the application of sparse matrix techniques to the computation of initial conditions is presented.

Proceedings ArticleDOI
TL;DR: In this article, a frequency domain direct efficient analysis and an optimization technique of a large class of lumped-distributed networks containing active elements are presented, where sensitivity and Hessian matrix calculations are performed using truncated Taylor series expansion of two-port parameters of subnetworks.
Abstract: An efficient direct frequency-domain analysis and optimization technique for a large class of lumped-distributed networks containing active elements is presented. Sensitivity and Hessian matrix calculations are performed using truncated Taylor series expansions of the two-port parameters of subnetworks. An interactive computer program was developed to demonstrate the application of the method. Examples of network optimization are included to illustrate the power of the technique.

Journal ArticleDOI
TL;DR: Three algorithms for the solution of the eigenvalue problem for a continuously parameterized family of sparse matrices are presented: a continuous LU (or LR) algorithm, a continuous QR algorithm, and a continuous Hessenberg algorithm.
Abstract: Three algorithms for the solution of the eigenvalue problem for a continuously parameterized family of sparse matrices are presented: a continuous LU (or LR) algorithm, a continuous QR algorithm, and a continuous Hessenberg algorithm. Each of the three algorithms may be implemented recursively, and the sparsity of the given matrices is preserved throughout the numerical process.

Dissertation
01 Nov 1982
TL;DR: An efficient sparse matrix decomposition scheme is developed to solve the large, sparse system of equations that arise during the integration of the DAE system and fully exploits the special structure of the coefficient matrix.
Abstract: This thesis is concerned with the development of numerical software for the simulation of gas transmission networks. This involves developing software for the solution of a large system of stiff differential/algebraic equations (DAE) containing frequent severe disturbances. The disturbances arise due to the varying consumer demands and the operation of network controlling devices such as the compressors. Special strategies are developed to solve the DAE system efficiently using a variable-step integrator. Two sets of strategies are devised; one for the implicit methods such as the semi-implicit Runge-Kutta method, and the other for the linearly implicit Rosenbrock-type method. Four integrators, based on different numerical methods, have been implemented and the performance of each one is compared with the British Gas network analysis program PAN, using a number of large, realistic transmission networks. The results demonstrate that the variable-step integrators are reliable and efficient. An efficient sparse matrix decomposition scheme is developed to solve the large, sparse system of equations that arise during the integration of the DAE system. The decomposition scheme fully exploits the special structure of the coefficient matrix. Lastly, for certain networks, the existing simulation programs fail to compute a feasible solution because of the interactions of the controlling devices in the network. To overcome this difficulty, the problem is formulated as a variational inequality model and solved numerically using an optimization routine from the NAG library (NAGFLIB (1982)). The reliability of the model is illustrated using three test networks.

Journal ArticleDOI
Amir Schoor
TL;DR: A fast algorithm for the multiplication of two sparse matrices whose average time complexity is an order of magnitude better than that of standard known algorithms; it avoids the additional unnecessary index comparisons, thus requiring only O(D,D&NK) time.
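The flavor of such an algorithm: iterate only over the stored nonzeros of A and, for each, only over the stored nonzeros of the matching row of B, so no time is spent on entries whose product is zero. A dict-of-dicts Python sketch (illustrative only; not Schoor's data structure or complexity analysis):

```python
def sparse_matmul(A, B):
    """Multiply sparse matrices stored as dict-of-dicts:
    A[i][k] holds the nonzero entries of row i.
    Work scales with the number of matching nonzero pairs."""
    C = {}
    for i, row in A.items():
        acc = {}
        for k, a in row.items():          # nonzeros of row i of A
            for j, b in B.get(k, {}).items():  # nonzeros of row k of B
                acc[j] = acc.get(j, 0) + a * b
        if acc:
            C[i] = acc
    return C

A = {0: {0: 2}, 1: {1: 3}}
B = {0: {1: 5}, 1: {0: 7}}
print(sparse_matmul(A, B))  # {0: {1: 10}, 1: {0: 21}}
```

Because B is indexed by row, each nonzero a_{ik} is paired directly with row k of B, with no scan over zero entries of either operand.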

Journal ArticleDOI
TL;DR: A sparse matrix method for the numerical solution of nonlinear differential equations arising in modeling of the renal concentrating mechanism using a renumbering of variables and equations such that the resulting Jacobian matrix has a block tridiagonal structure.
Abstract: A sparse matrix method for the numerical solution of nonlinear differential equations arising in modeling of the renal concentrating mechanism is given. The method involves a renumbering of the variables and equations such that the resulting Jacobian matrix has a block tridiagonal structure and the blocks above and below the main diagonal have a known set of complementary nonzero columns. The computer storage for the method is O(n). Results of some numerical experiments showing the stability of the method are given.
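The scalar analogue of a block tridiagonal solve, the Thomas algorithm, shows why such a renumbering yields O(n) storage and work: elimination only ever touches the diagonal and its immediate neighbors. In the block version each scalar division becomes a small block solve. A Python sketch of the scalar case (illustration, not the paper's block algorithm):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n) time and storage.
    a: sub-diagonal (len n-1), b: diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n), n >= 2."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

x = thomas([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0])
# solution is approximately [1, 2, 3]
```

Only the three diagonals and two work vectors are stored, which is the scalar counterpart of the O(n) storage claim in the abstract.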

Book ChapterDOI
01 Jan 1982
TL;DR: This paper surveys software for the solution of sparse sets of linear equations and examines codes which can be used to solve equations arising in the solutions of elliptic partial differential equations.
Abstract: This paper surveys software for the solution of sparse sets of linear equations. In particular we examine codes which can be used to solve equations arising in the solution of elliptic partial differential equations.

Proceedings ArticleDOI
01 Jan 1982
TL;DR: In this paper, a new algorithm is presented for solving simulation problems which employs minimization and a partial decomposition of the matrix resulting from an alternate diagonal ordering, which has been applied to problems in two and three dimensions with good results and examples are given to show the convergence of the method.
Abstract: A new algorithm is presented for solving simulation problems which employs minimization and a partial decomposition of the matrix resulting from an alternate diagonal ordering. The technique has been applied to problems in two and three dimensions with good results, and examples are given to show the convergence of the method. Examples include heterogeneous, ill-conditioned problems as well as ones with negative and positive coefficients.

ReportDOI
01 Aug 1982
TL;DR: An equation-ordering algorithm based on local equation decoupling is proposed to maintain a high flow rate of scalar computations within a floating point pipeline to solve highly-sparse unpatterned systems efficiently via explicit code generation.
Abstract: To solve directly a sparse, unsymmetric matrix equation Ax = b, an equation-ordering algorithm based on local equation decoupling is proposed to maintain a high flow rate of scalar computations within a floating point pipeline. Software is described to solve highly-sparse unpatterned systems efficiently via explicit code generation. Rates in the range of 15 MFLOPS on the CRAY-1 are achieved. (Author)

ReportDOI
01 Oct 1982
TL;DR: The description and use of a Fortran general sparse solver, modified to run efficiently on a vector processor, and its CRAY-1 performance in the analysis of 2D grids are presented.
Abstract: Description and use of a Fortran general sparse solver, modified to run efficiently on a vector processor, are given. CRAY-1 performance in the analysis of 2D grids is presented. (Author)