
Showing papers on "Sparse matrix published in 1981"


Journal ArticleDOI
TL;DR: A computer program for obtaining a permutation of a general sparse matrix that places a maximum number of nonzero elements on its diagonal is described, along with some initial attempts at implementing an algorithm with superior asymptotic complexity.

Abstract: A computer program for obtaining a permutation of a general sparse matrix that places a maximum number of nonzero elements on its diagonal is described. The history of this problem and the main motivation for developing the algorithm are briefly discussed. Comments are made on the use of cheap heuristics, and the complexity of the algorithm is examined both in terms of its worst-case bound and its performance on typical examples. Finally, some initial attempts at implementing an algorithm with superior asymptotic complexity are described.
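The core of such a program is a maximum-transversal (bipartite matching) computation. A minimal augmenting-path sketch in Python, taking a row-wise nonzero pattern as input; this illustrates the underlying idea only, not the code the paper describes:

```python
def max_transversal(rows):
    """Assign rows to columns so that a maximum number of nonzeros
    lands on the diagonal of the permuted matrix.  rows[i] is the set
    of column indices holding a nonzero in row i.  Classical
    depth-first augmenting-path search, worst case O(n * nnz)."""
    match = {}  # column -> row currently assigned to it

    def augment(i, visited):
        for j in rows[i]:
            if j not in visited:
                visited.add(j)
                # take a free column, or evict and re-route its owner
                if j not in match or augment(match[j], visited):
                    match[j] = i
                    return True
        return False

    size = sum(augment(i, set()) for i in range(len(rows)))
    return match, size
```

For the pattern `[{1, 2}, {0}, {1}]`, whose diagonal is entirely zero, the search still finds a transversal of size 3, i.e. a permutation placing a nonzero in every diagonal position.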

185 citations


Journal ArticleDOI
TL;DR: The symbolic matrix method which gives compact representation and efficient determination of expressions for the Hamiltonian and other matrix operators arising in configuration interaction (CI) calculations is presented and provides a basis for a general direct CI method which will be presented in a forthcoming paper.
Abstract: The symbolic matrix method which gives compact representation and efficient determination of expressions for the Hamiltonian and other matrix operators arising in configuration interaction (CI) calculations is presented. With this method, the computing and storage requirements for matrix expressions become insignificant compared to the total requirements of a CI calculation. The efficiency is achieved by taking advantage of analogies between expressions of different matrix elements to reduce drastically the number of expressions determined explicitly. The symbolic matrix method is completely general, unrestricted by the type of operators considered, or by the choice of n‐particle basis. It can take full advantage of any point group symmetry, and the ordered interacting spaces to reduce the dimension of the n‐particle basis. In addition, the method provides a basis for a general direct CI method which will be presented in a forthcoming paper. A comparison with the graphical unitary group approach is provided.

136 citations


Journal ArticleDOI
TL;DR: Methods are described for the solution of certain sparse linear systems with a non-symmetric matrix that arises from discretisation of second order partial differential equations with first order derivative terms.

121 citations


Journal ArticleDOI
TL;DR: Algorithms and data structures that may be used in the efficient implementation of symmetric Gaussian elimination for sparse systems of linear equations with positive definite coefficient matrices are presented.
Abstract: In this paper we present algorithms and data structures that may be used in the efficient implementation of symmetric Gaussian elimination for sparse systems of linear equations with positive definite coefficient matrices. The techniques described here serve as the basis for the symmetric codes in the Yale Sparse Matrix Package.
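The storage scheme underlying such codes is compressed sparse row ("Yale") format: row pointers, column indices, and values. A minimal sketch in plain Python, illustrative only and not the package's actual data structures:

```python
def to_csr(dense):
    """Convert a dense matrix (list of lists) to the Yale/CSR triplet:
    row pointers ia, column indices ja, values a.  Only nonzeros are
    stored; row i occupies positions ia[i]..ia[i+1]-1 of ja and a."""
    ia, ja, a = [0], [], []
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                ja.append(j)
                a.append(v)
        ia.append(len(ja))
    return ia, ja, a

def csr_matvec(ia, ja, a, x):
    """y = A @ x touching only the stored nonzeros."""
    y = []
    for i in range(len(ia) - 1):
        s = 0
        for k in range(ia[i], ia[i + 1]):
            s += a[k] * x[ja[k]]
        y.append(s)
    return y
```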

85 citations


01 Jan 1981
TL;DR: The rounding errors of the computational Lanczos algorithm are examined in order to account for the differences between the ideal and the machine-operator quantities and a formal error analysis relates the errors to properties of the matrix tridiagonalization problem solved by the algorithm.
Abstract: Two algorithms of use in sparse matrix computation are studied. The rounding errors of the computational Lanczos algorithm are examined in order to account for the differences between the ideal and the machine-operator quantities. The observed behavior of these errors is explained by means of a formal error analysis which relates the errors to properties of the matrix tridiagonalization problem solved by the algorithm. An investigation of the orthogonal polynomials associated with the algorithm partially explains the observed phenomena. For Richardson's method of solving systems of linear equations, the associated uniform approximation problem is taken as a particular case of a more general problem. The solutions to the latter are characterized. A version of the algorithm of Remez is shown to solve these problems.
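The object of the error analysis, the plain (unreorthogonalized) Lanczos iteration, can be sketched in a few lines of NumPy. In exact arithmetic the vectors q_k stay orthonormal; in floating point they drift as the iteration proceeds, which is the behavior the analysis explains. For a few steps on a small matrix the drift is still negligible:

```python
import numpy as np

def lanczos(A, q1, m):
    """Plain Lanczos tridiagonalization of symmetric A, no
    reorthogonalization.  Returns the basis Q (n x m) and the
    tridiagonal entries alpha (diagonal) and beta (off-diagonal).
    In finite precision Q.T @ Q drifts away from the identity
    as m grows."""
    n = len(q1)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for k in range(m):
        w = A @ Q[:, k]
        if k > 0:
            w -= beta[k - 1] * Q[:, k - 1]
        alpha[k] = Q[:, k] @ w
        w -= alpha[k] * Q[:, k]
        if k < m - 1:
            beta[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / beta[k]
    return Q, alpha, beta
```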

39 citations


Journal ArticleDOI
M. Ueno1
TL;DR: In this article, a general design formulation for a Butler matrix (B matrix) is described, where the B matrix design problem is used to determine phase shift location and value in a matrix, when the number of beams (elements of the array) M = 2^N and the scattering matrix for the hybrid couplers are specified.
Abstract: A systematic general design formulation for a Butler matrix (B matrix) is described. The B matrix design problem discussed is used to determine phase shift location and value in a matrix, when the number of beams (elements of the array) M = 2^N and the scattering matrix for the hybrid couplers are specified. The design formulation presented is based on the fact that a B matrix design procedure and an FFT algorithm are equivalent in fundamental concepts. It is shown that the B matrix design procedure can be systematically formulated by FFT algorithm modifications which preserve the topological properties of the original signal flow diagram. A simple design formula has been established by this formulation.

30 citations


Book ChapterDOI
01 Jan 1981
TL;DR: The variable band (active column) and frontal methods for the solution of linear algebraic equations are compared, including their use for nonlinear problems, dynamics, and substructure analyses.
Abstract: The variable band, active column and the frontal methods for the solution of linear algebraic equations are compared. Areas considered in the comparison of the two methods are: 1. The number of numerical operations required for the triangular decomposition and resolution for different load vectors. 2. Logical operations and data transfer requirements associated with the formation of equations, decomposition and resolution. 3. Use of the methods for nonlinear problems, dynamics and substructure analyses. 4. Ease of computer implementation on mini-computers, micro-computers and vectorized computers.

23 citations


Journal ArticleDOI
TL;DR: It is shown that under certain conditions all previously published methods have the potential of generating zero-diagonal pivots, regardless of element values in the network.
Abstract: The modified nodal approach has been widely used for formulating network equations. Although this approach is quite general, zero-diagonal elements may exist in the network matrix. When sparse matrix techniques with diagonal pivoting are used to solve these equations, extreme care should be taken so as not to choose a zero-valued pivot. In this paper it is shown that under certain conditions all previously published methods have the potential of generating zero-diagonal pivots, regardless of element values in the network. A simple partitioning and ordering strategy is then presented which guarantees that no zero-valued pivots will be generated for any choice of diagonal pivots. The method has been implemented and well tested and various illustrative examples are included.
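A minimal illustration (not from the paper) of why zero diagonals arise structurally: an ideal voltage source in modified nodal analysis contributes only ±1 entries in its branch row and column, so its diagonal entry is zero for every choice of element values:

```python
import numpy as np

# Modified nodal equations for the simplest case that defeats diagonal
# pivoting: an ideal voltage source V between node 1 and ground, plus a
# resistor R from node 1 to ground.  Unknowns: node voltage v1 and the
# source current iV.  The branch row and column carry only +/-1 entries,
# so A[1, 1] is structurally zero, regardless of G.
G = 1.0                       # 1/R, any element value
A = np.array([[G,   1.0],
              [1.0, 0.0]])    # zero diagonal entry for every G
b = np.array([0.0, 5.0])      # V = 5 volts
v1, iV = np.linalg.solve(A, b)
```

Here the exact solution is v1 = 5 and iV = -5; a solver restricted to diagonal pivots would stall on the zero A[1, 1] unless the equations are reordered or partitioned, which is the problem the paper's strategy removes.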

21 citations


Proceedings ArticleDOI
05 Aug 1981
TL;DR: A new Factoring Recursive Minor Expansion algorithm with Memo, FDSLEM, is introduced; its properties make it relatively straightforward to implement an algorithm that generates the approximate solution of a perturbed system of equations.
Abstract: Symbolic solutions of large sparse systems of linear equations, such as those encountered in several engineering disciplines (electrical engineering, biology, chemical engineering, etc.), are often very lengthy and have for this reason received only occasional attention. This confronts the designer of a new and probably more successful symbolic solution method with the hard problem of finding a representation which is suitable in the corresponding engineering areas while still being neat and compact. It is believed that this problem has been solved to a great degree with the introduction of the new Factoring Recursive Minor Expansion algorithm with Memo, FDSLEM, presented in this paper. The FDSLEM algorithm has important properties which make the implementation of an algorithm which can generate the approximate solution of a perturbed system of equations relatively straightforward. The algorithms given can operate on arbitrary sparse matrices, but one obtains optimal profit from the properties of the algorithm if the matrices have a certain fundamental form, as is illustrated in the paper.

20 citations


12 Jun 1981
TL;DR: Both dynamic analysis and design sensitivity analysis and optimization are shown to be well-suited to application of efficient sparse matrix computational methods.
Abstract: A method for formulating and automatically integrating the equations of motion of quite general constrained dynamic systems is presented. Design sensitivity analysis is also carried out using a state space method that has been used extensively in structural design optimization. Both dynamic analysis and design sensitivity analysis and optimization are shown to be well-suited to application of efficient sparse matrix computational methods. Numerical integration is carried out using a stiff numerical integration method that treats mixed systems of differential and algebraic equations. A computer code that implements the method for planar systems is outlined and a numerical example is treated. The dynamic response of a classical slider-crank is analyzed and its design is optimized.

18 citations


Journal ArticleDOI
TL;DR: The LINPACK package of linear equation solving software provides a reliable and inexpensive algorithm for estimating the condition number of a dense matrix, but the direct generalization to banded or sparse matrices is reliable, but not necessarily inexpensive.
Abstract: The LINPACK package of linear equation solving software provides a reliable and inexpensive algorithm for estimating the condition number of a dense matrix. The direct generalization to banded or sparse matrices is reliable, but not necessarily inexpensive. The simple modification described in this note can bring the cost of the algorithm back to a reasonable level.
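The flavor of the estimator can be sketched as follows. This is a simplified stand-in (a fixed right-hand side of ones rather than LINPACK's adaptive ±1 sign selection), so it only produces a lower bound on the true 1-norm condition number:

```python
import numpy as np

def condest(A):
    """Rough LINPACK-style condition estimate: ||A||_1 times a lower
    bound on ||A^{-1}||_1 obtained from a solve with A^T and a solve
    with A.  A sketch of the idea only -- the real algorithm chooses
    the +/-1 entries of e adaptively to make ||y|| large."""
    n = A.shape[0]
    e = np.ones(n)                      # crude fixed sign choice
    y = np.linalg.solve(A.T, e)         # y ~ large if A ill-conditioned
    z = np.linalg.solve(A, y)
    est = np.linalg.norm(z, 1) / np.linalg.norm(y, 1)
    return np.linalg.norm(A, 1) * est
```

For A = diag(1, 1e-3) the estimate comes out near 999, just under the exact 1-norm condition number of 1000, illustrating the lower-bound character of the estimate.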

Journal ArticleDOI
TL;DR: An efficient algorithm called PRIM is proposed for transposing an arbitrary R × C matrix which is too large to be stored in its entirety in working memory and which instead is stored by rows on disk.
Abstract: An efficient algorithm called PRIM is proposed for transposing an arbitrary R × C matrix which is too large to be stored in its entirety in working memory and which instead is stored by rows on disk. PRIM facilitates the execution of numerical matrix algorithms which operate both by rows and by columns.

Proceedings ArticleDOI
Earl R. Barnes1
01 Dec 1981
TL;DR: A heuristic algorithm for partitioning the nodes of a graph into a given number of subsets in such a way that the number of edges connecting the various subsets is a minimum.
Abstract: We present a heuristic algorithm for partitioning the nodes of a graph into a given number of subsets in such a way that the number of edges connecting the various subsets is a minimum. The sizes of the subsets must be specified in advance.
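The paper's heuristic is based on a specific construction; as a generic illustration of the problem itself, here is a toy first-improvement swap refinement (hypothetical, and much cruder than the published algorithm) that keeps the prescribed subset sizes fixed while reducing the number of crossing edges:

```python
def cut_size(edges, part):
    """Count edges whose endpoints lie in different subsets."""
    return sum(1 for u, v in edges if part[u] != part[v])

def refine(edges, part, sweeps=5):
    """First-improvement pairwise swaps across a two-way partition.
    Subset sizes never change: each move exchanges the memberships of
    one node from each side, and is kept only if the cut shrinks."""
    nodes = list(part)
    for _ in range(sweeps):
        improved = False
        for u in nodes:
            for v in nodes:
                if part[u] == part[v]:
                    continue
                before = cut_size(edges, part)
                part[u], part[v] = part[v], part[u]
                if cut_size(edges, part) < before:
                    improved = True
                else:
                    part[u], part[v] = part[v], part[u]  # undo
        if not improved:
            break
    return part
```

On two triangles joined by a single bridge edge, the refinement recovers the natural triangle/triangle split with a cut of one edge.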

Journal ArticleDOI
TL;DR: A set of algorithms for solution updating due to large changes in system parameters is derived by partitioning the changes in the values of the elements from the remainder of the system and then applying tearing procedures to solve a partitioned system of equations.
Abstract: A set of algorithms for solution updating due to large changes in system parameters is derived. The algorithms are derived by partitioning the changes in the values of the elements from the remainder of the system and then applying tearing procedures to solve the partitioned system of equations. The algorithms obtained include new ones as well as most of the previously proposed ones for solving this problem. Sparse matrix solution techniques are used whenever possible, and the computational requirements of the algorithms are assessed and compared.

Journal ArticleDOI
TL;DR: It is shown that the usual inner product type algorithm is by far the best algorithm for simple matrix multiplication or matrix chain product in terms of minimal basic term growth and minimal error complexities.
Abstract: The error complexity analysis of three algorithms for matrix multiplication and matrix chain product has been given. It is shown that the usual inner product type algorithm is by far the best algorithm for simple matrix multiplication or matrix chain product in terms of minimal basic term growth and minimal error complexities, the latter being independent of the order of pairwise matrix multiplications. Winograd's algorithm is comparable to the usual one, although in matrix chain product the error and data complexities are very sensitive to the order of pairwise matrix multiplication. Strassen's algorithm is not very attractive numerically for having the largest upper bound for both the maximum error complexity and the number of basic terms generated.
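For concreteness, the 2 × 2 building block of Strassen's scheme: seven multiplications instead of the inner-product rule's eight, at the price of extra additions and subtractions, which is the source of its larger error bound. A sketch for illustration, not code from the paper:

```python
def strassen2(A, B):
    """One level of Strassen's 7-multiplication scheme on 2x2 matrices
    (lists of lists).  The p-terms mix entries additively before
    multiplying, which is why the basic-term growth exceeds that of
    the plain inner-product rule."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p1 + p5 - p3 - p7]]
```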

Journal ArticleDOI
TL;DR: With an adequate discretization procedure, the matrix blocks of the block-diagonal matrix do not entirely overlap; this feature can be used to advantage when solving the eigenvalue problem by sparse matrix techniques.


Journal ArticleDOI
TL;DR: This paper illustrates that the approximation to the Jacobian matrix may be automatically adjusted to reduce the degree of coupling, and partition the predictor-corrector iteration into implicit and explicit segments, which further reduces the magnitude of the matrix operation without detracting from the accuracy of the integration.
Abstract: It is well known that large sets of coupled ordinary differential equations are difficult to integrate efficiently. The “stiffness” of the equations, arising from the wide range of embedded time constants, necessitates that implicit integration techniques be used. These normally require some form of algebraic equation solution involving the Jacobian matrix, a necessary operation which becomes increasingly expensive with the size of the equation set.Efforts to partition large equation sets into linear and nonlinear or stiff and nonstiff subsets have proved fruitful, but are difficult to implement in a general manner, as such a division cannot always be found.In the integration algorithm due to Gear, the implicit requirement is handled in an approximate manner by using a predictor-corrector iteration, again involving the Jacobian. The expediency of this operation is greatly enhanced by using sparse matrix techniques, but the operation remains relatively expensive for large sets. This paper illustrates that the approximation to the Jacobian matrix may be automatically adjusted to reduce the degree of coupling, and partition the predictor-corrector iteration into implicit and explicit segments. This further reduces the magnitude of the matrix operation without detracting from the accuracy of the integration.
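The effect of iterating with an approximate Jacobian can be illustrated with a linearly implicit Euler step. This is a hedged sketch of the general idea, not Gear's actual corrector: J enters only the iteration matrix, so a sparsified approximation changes the cost of the linear algebra without changing the equations being integrated, and stability on stiff problems is retained:

```python
import numpy as np

def lin_implicit_euler(f, J, y0, h, steps):
    """Linearly implicit Euler with a frozen, possibly approximate
    Jacobian J: each step solves (I - h*J) dy = h*f(y) and updates
    y += dy.  Factoring M once amortizes the matrix work across steps,
    the same economy that sparse techniques buy in Gear's method."""
    y = np.array(y0, dtype=float)
    M = np.eye(len(y)) - h * np.asarray(J)
    for _ in range(steps):
        y = y + np.linalg.solve(M, h * f(y))
    return y

# Stiff test problem y' = -100 y with h = 0.1 (h*lambda = -10): the
# explicit Euler iterate would grow like (-9)^n, but the implicit form
# contracts by 1/11 per step.
y = lin_implicit_euler(lambda y: -100.0 * y, [[-100.0]], [1.0], 0.1, 5)
```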

Journal ArticleDOI
TL;DR: A new algorithm has been developed for the triangularisation of banded matrices that is computationally economical and advantageous compared to existing algorithms.

Journal ArticleDOI
TL;DR: The development of a dynamic algorithm for improving the estimates of the involved parameters is presented; the attained rate of convergence is approximately O(h^{1/2}), and in certain cases the algorithm outperforms the one using estimated parameters.
Abstract: The Preconditioned Simultaneous Displacement (PSD) iterative method is considered for the solution of symmetric, sparse matrix problems. The development of a dynamic algorithm for improving the estimates of the involved parameters is presented. These estimates are then used to accelerate the PSD method by employing semi-iterative techniques. The algorithm determines a sequence of parameters adaptively while the iteration is in progress, without requiring preliminary eigenvalue estimates (only trivial input parameters are required). The performance of the algorithm is tested on a number of generalised Dirichlet problems. It is seen that the attained rate of convergence is approximately O(h^{1/2}) and is better than that of the algorithm using estimated parameters in certain cases.


Journal ArticleDOI
TL;DR: A new general method is derived for the reduction of linear systems of equations with symbolic polynomial coefficients that is suitable for sparse matrix techniques; its implementation in a program for symbolic analysis of circuits offers significant advantages over the existing reduction method.
Abstract: A new general method is derived for reduction of linear systems of equations with symbolic polynomial coefficients suitable for using sparse matrix techniques. Its implementation in a program for symbolic analysis of circuits offers significant advantages over the existing reduction method.


01 Jan 1981
TL;DR: New algorithms and special purpose sequential processor architectures for the computation of a class of one-, two- and multi-dimensional unitary transforms are presented and it is shown that the two-dimensional FFT processor architecture proposed in this work requires less hardware than the conventional implementations.
Abstract: This work presents the development of new algorithms and special purpose sequential processor architectures for the computation of a class of one-, two- and multi-dimensional unitary transforms. In particular, a technique is presented to factorize the transformation matrices of a class of multi-dimensional unitary transforms, having separable kernels, into products of sparse matrices. These sparse matrices consist of Kronecker products of factors of the one-dimensional transformation matrix. Such factorizations result in fast algorithms for the computation of a variety of multi-dimensional unitary transforms including Fourier, Walsh-Hadamard and generalized Walsh transforms. It is shown that the u-dimensional Fourier and generalized Walsh transforms can be implemented with a u-dimensional radix-r butterfly operation requiring considerably fewer complex multiplications than the conventional implementation using a one-dimensional radix-r butterfly operation. Residue number principles and techniques are applied to develop novel special purpose sequential processor architectures for the computation of one-dimensional discrete Fourier and Walsh-Hadamard transforms and convolutions in real-time. The residue number system (RNS) based implementations yield a significant improvement in processing speed over the conventional realizations using the binary number system. As an illustration of the factorization techniques developed in this work, novel sequential architectures of RNS-based fast Fourier, Walsh-Hadamard and generalized Walsh transform processors for real-time processing of two-dimensional signals are presented. These sequential processor architectures are capable of processing large bandwidth (> 5 MHz) input sequences. The application of the proposed FFT processors for the real-time computation of two-dimensional convolutions is also investigated. A special memory structure to support two-dimensional convolution operations is presented, and it is shown that the two-dimensional FFT processor architecture proposed in this work requires less hardware than the conventional implementations. The FFT algorithms and processor architectures are verified by computer simulation.
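The separable-kernel property behind these factorizations can be checked numerically in a few lines. This NumPy sketch uses the dense DFT matrix for clarity, so it illustrates the Kronecker structure itself rather than the sparse butterfly factors:

```python
import numpy as np

def dft_matrix(n):
    """Dense 1-D DFT matrix F with F[j, k] = exp(-2*pi*i*j*k/n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

n = 4
F = dft_matrix(n)
X = np.arange(n * n, dtype=float).reshape(n, n)

# Separable kernel: the 2-D transform is F @ X @ F.T, equivalently
# (F kron F) applied to the flattened image -- the Kronecker-product
# structure that the sparse factorizations exploit.
Y_sep = F @ X @ F.T
Y_kron = (np.kron(F, F) @ X.reshape(-1)).reshape(n, n)
```

Both forms agree with `np.fft.fft2`, confirming that the u-dimensional transform decomposes into one-dimensional factors.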

Journal ArticleDOI
TL;DR: This paper describes a form of purposeful data perturbation in a linear programming model which pertains to uncertainties in the magnitudes of the matrix coefficients, and several theorems are presented that measure bias under a variety of assumptions.
Abstract: This paper describes a form of purposeful data perturbation in a linear programming model which pertains to uncertainties in the magnitudes of the matrix coefficients. A problem in value pool construction is described first, then a resolution based on a new concept, “covering lattices.” Computer representation of real values, limited by finite precision, is an example of a covering lattice. After presenting the strategy and tactical variations, the effects of resident distortion are analyzed. Several theorems are presented that measure bias under a variety of assumptions. An appendix is included that contains mathematical proofs.

Journal ArticleDOI
TL;DR: A new method is proposed for generating a sequence of sparse approximations to the second-derivative matrix that does not require solving an associated linear system of equations at each step.

01 Mar 1981
TL;DR: Reports of three related studies germane to structural optimization are provided, describing a virtual memory simulator suitable for managing the large quantities of numerical data required for sparse matrix manipulation, two sparse matrix processors suitable for the large equation systems arising in structural analysis, and a comparison of two optimization algorithms.
Abstract: Reports of three related studies germane to structural optimization are provided. The first describes a virtual memory simulator suitable for management of large quantities of numerical data such as required for sparse matrix manipulation. The second report describes two sparse matrix processors suitable for the large equation systems arising in structural analysis and provides comparative results. The last report describes a study of two optimization algorithms in the context of structural optimization. A number of test results for parameter studies and a general comparison of the two algorithms are given.

01 Jan 1981
TL;DR: The results show that the combination of buffer reservation and processor capacity allocation gives strictly nondecreasing network output as a function of increasing network input load, i.e., undesirable store and forward congestion effects are eliminated.
Abstract: The purpose of this study is twofold. First, the study illustrates the utility of applying sparse matrix methods to packet network models. Secondly, these methods are used to give new results about the control of store and forward congestion in packet networks. Store and forward congestion (node to node blocking) reduces the effective traffic carrying capacity of the network by unnecessarily idling network resources. This study shows how store and forward congestion can be controlled by a combination of buffer reservation and processor capacity allocation. The scheme presented is analyzed using a Markovian state-space model of two coupled packet switches. The model contains more detail than previous analytic models. It is therefore solved using numerical sparse matrix methods. The results show that the combination of buffer reservation and processor capacity allocation gives strictly nondecreasing network output as a function of increasing network input load, i.e., undesirable store and forward congestion effects are eliminated.
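The numerical core of such a model, finding the stationary distribution of a Markov state space, can be sketched as follows. This is a dense stand-in for illustration; the paper's point is precisely that the coupled-switch state space is large enough to require sparse solvers:

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of a continuous-time Markov chain
    with generator Q (rows sum to zero): solve pi @ Q = 0 subject to
    sum(pi) = 1 by replacing one balance equation with the
    normalization row."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0          # normalization: probabilities sum to 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For a two-state birth-death chain with rates 1 and 2, the stationary distribution is (2/3, 1/3); a packet-switch model would have the same shape with thousands of mostly empty rows, which is where sparse storage and solution pay off.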


Journal ArticleDOI
TL;DR: For the matrices A mentioned in the title, the authors determine the limit points up to which a real factorization of the form A = QQ^T is possible.
Abstract: For the matrices A mentioned in the title we determine the limit points up to which a real factorization of the form A = QQ^T is possible. Here Q = (q_ij) is a circulant matrix in which, of the elements q_ij and q_ji with i ≠ j, always one element vanishes.