
Showing papers on "Sparse approximation published in 1986"


Journal ArticleDOI
TL;DR: Sparse matrices can be factored by Gaussian elimination with partial pivoting in time proportional to the number of arithmetic operations; the key idea is a new triangular solver that uses depth-first search and topological ordering to take advantage of sparsity in the right-hand side.
Abstract: Existing sparse partial pivoting algorithms can spend asymptotically more time manipulating data structures than doing arithmetic, although they are tuned to be efficient on many large problems. We present an algorithm to factor sparse matrices by Gaussian elimination with partial pivoting in time proportional to the number of arithmetic operations. Implementing this algorithm requires only simple data structures and gives a code that is competitive with, and often faster than, existing sparse codes. The key idea is a new triangular solver that uses depth-first search and topological ordering to take advantage of sparsity in the right-hand side.
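The triangular-solve idea described above can be illustrated in a few lines. The following is a minimal sketch (not the paper's code; the column-wise dictionary storage and the function name `sparse_lower_solve` are illustrative assumptions): a depth-first search from the nonzeros of the right-hand side discovers the nonzero pattern of the solution, and processing columns in topological order performs only the arithmetic that is actually needed.

```python
def sparse_lower_solve(L, b):
    """Solve L x = b for unit-lower-triangular L.

    L is stored by column as {col: [(row, val), ...]} (strictly below
    the diagonal; the diagonal is implicitly 1).  b is a sparse vector
    {index: value}.  Returns x as a sparse {index: value} dict.
    """
    # Depth-first search from the nonzeros of b over the graph of L
    # (edge j -> i whenever L[i][j] != 0); the post-order list is a
    # reverse topological order of the reachable set.
    visited, postorder = set(), []

    def dfs(j):
        visited.add(j)
        for i, _ in L.get(j, ()):
            if i not in visited:
                dfs(i)
        postorder.append(j)

    for j in b:
        if j not in visited:
            dfs(j)

    x = dict(b)
    # Process columns in topological order: each x[j] is final before
    # its contribution is scattered into the rows below it.
    for j in reversed(postorder):
        xj = x.get(j, 0.0)
        if xj != 0.0:
            for i, lij in L.get(j, ()):
                x[i] = x.get(i, 0.0) - lij * xj
    return x
```

The work is proportional to the number of nonzeros touched, not to the dimension of the system, which is the point of the approach.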

188 citations



Journal ArticleDOI
TL;DR: It is shown that for any sparse symmetric matrix, and assuming enough processors are available, the full inverse of the matrix can be calculated in the same amount of time as the sparse inverse.
Abstract: This paper presents a parallel algorithm for obtaining the inverse of a large, nonsingular symmetric matrix A of dimension n × n. The inversion method proposed is based on the triangular factors of A. The task of obtaining the "sparse inverse" of A is represented by a directed acyclic graph. The relation between the triangulation graph and the sparse inversion graph is given. The algorithm and the graph for the full inversion of A are also given. It is shown that for any sparse symmetric matrix, and assuming enough processors are available, the full inverse of the matrix can be calculated in the same amount of time as the sparse inverse. For ideally sparse matrices (such as tridiagonal matrices) the order of computation required in both cases is of order log₂ n. For full matrices the order of computation is of order n log₂ n. Claims are substantiated using test data from several power systems.
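To make the "inversion from triangular factors" idea concrete, here is a serial sketch (not the paper's parallel algorithm; the dense storage and function names are illustrative assumptions): each column of A⁻¹ comes from two triangular solves against the Cholesky factor, and since the columns are independent they are natural per-processor tasks on a parallel machine.

```python
def cholesky(A):
    """Dense Cholesky factor L of symmetric positive definite A (A = L L^T)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = s ** 0.5
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

def inverse_from_cholesky(L):
    """Columns of A^{-1} via forward/backward solves; columns are independent."""
    n = len(L)
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):  # each column could be a separate parallel task
        # forward solve: L y = e_col
        y = [0.0] * n
        for i in range(n):
            rhs = (1.0 if i == col else 0.0) - sum(L[i][k] * y[k] for k in range(i))
            y[i] = rhs / L[i][i]
        # backward solve: L^T x = y
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            rhs = y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))
            x[i] = rhs / L[i][i]
        for i in range(n):
            inv[i][col] = x[i]
    return inv
```

The paper's contribution is in exploiting sparsity and expressing the task dependencies as a directed acyclic graph; the sketch above shows only the arithmetic skeleton.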

50 citations


Journal ArticleDOI
TL;DR: Algorithms for performing sparse Cholesky factorization and sparse triangular solutions on a shared-memory multiprocessor computer, along with some numerical experiments demonstrating their performance on a Sequent Balance 8000 system are presented.
Abstract: Algorithms and software for solving sparse symmetric positive definite systems on serial computers have reached a high state of development. In this paper, we present algorithms for performing sparse Cholesky factorization and sparse triangular solutions on a shared-memory multiprocessor computer, along with some numerical experiments demonstrating their performance on a Sequent Balance 8000 system.
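As a toy illustration of shared-memory parallel Cholesky (the abstract gives no algorithmic detail, so this is only a hedged sketch of one standard decomposition of the work, not the paper's method): in a right-looking factorization, once column j is finished, the updates it applies to the remaining columns touch disjoint data and can be farmed out to threads.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_cholesky(A, workers=4):
    """Dense right-looking Cholesky; per-column updates run concurrently."""
    n = len(A)
    L = [row[:] for row in A]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for j in range(n):
            # finish column j serially
            L[j][j] **= 0.5
            for i in range(j + 1, n):
                L[i][j] /= L[j][j]

            def update(k, j=j):
                # subtract column j's rank-1 contribution from column k;
                # distinct k tasks write disjoint columns, so this is safe
                for i in range(k, n):
                    L[i][k] -= L[i][j] * L[k][j]

            list(pool.map(update, range(j + 1, n)))
    # zero the strictly upper triangle left over from the input copy
    for i in range(n):
        for k in range(i + 1, n):
            L[i][k] = 0.0
    return L
```

A sparse version, as in the paper, would additionally schedule columns by their dependency structure rather than strictly in order.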

46 citations


Journal ArticleDOI
TL;DR: This paper introduces general row merging schemes for the $QR$ decomposition of sparse matrices by Givens rotations and presents an algorithm to determine automatically a sequence of submatrix rotations appropriate for sparse decomposition.
Abstract: This paper introduces general row merging schemes for the $QR$ decomposition of sparse matrices by Givens rotations. They can be viewed as a generalization of row rotations to submatrix rotations (or merging) in the recent method by George and Heath [12]. Based on the column ordering and the structure of the given sparse matrix, we present an algorithm to determine automatically a sequence of submatrix rotations appropriate for sparse decomposition. It is shown that the actual numerical computation can be organized as a sequence of reductions of two upper trapezoidal full submatrices into another upper trapezoidal full matrix. Experimental results are provided to compare the practical performance of the proposed method and the George-Heath scheme. Significant reduction in arithmetic operations and factorization time is achieved in exchange for a very modest increase in working storage. The interpretation of general row merging as a special variable row pivoting method is also presented.

43 citations



Book ChapterDOI
17 Sep 1986
TL;DR: The solution of large sparse systems using Gaussian elimination on both local and shared memory parallel computers is discussed.
Abstract: We discuss the solution of large sparse systems using Gaussian elimination on both local and shared memory parallel computers.

8 citations


Journal ArticleDOI
TL;DR: A new implementation of the shortest augmenting path approach for solving sparse assignment problems and computational experience documenting its efficiency is described.
Abstract: We describe a new implementation of the shortest augmenting path approach for solving sparse assignment problems and report computational experience documenting its efficiency.
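For readers unfamiliar with the shortest augmenting path approach, here is a compact dense-matrix version (a standard textbook formulation with dual potentials, not the authors' sparse implementation): rows are assigned one at a time by growing a shortest augmenting path from the unassigned row to an unassigned column.

```python
def assignment(cost):
    """Minimum-cost assignment by shortest augmenting paths.

    cost is an n-by-n matrix; returns match with match[i] = column of row i.
    Uses 1-based internal arrays with index 0 as a sentinel.
    """
    n = len(cost)
    INF = float("inf")
    u = [0.0] * (n + 1)        # row potentials
    v = [0.0] * (n + 1)        # column potentials
    p = [0] * (n + 1)          # p[j] = row currently matched to column j
    way = [0] * (n + 1)        # predecessor column on the augmenting path
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:            # Dijkstra-like search over reduced costs
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:     # reached an unassigned column
                break
        while j0:              # augment along the path
            p[j0] = p[way[j0]]
            j0 = way[j0]
    match = [0] * n
    for j in range(1, n + 1):
        match[p[j] - 1] = j - 1
    return match
```

A sparse implementation, as in the paper, restricts the inner scan to the arcs actually present, which is where its efficiency comes from.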

7 citations


Proceedings Article
01 Jan 1986
TL;DR: In this paper, the authors show that hardware gather/scatter allows general sparse elimination algorithms to outperform algorithms based on a band, envelope, or block structure on vector computers.
Abstract: Recent vector supercomputers provide vector memory access to "randomly" indexed vectors, whereas early vector supercomputers required contiguously or regularly indexed vectors. This additional capability, known as "hardware gather/scatter," can be used to great effect in general sparse Gaussian elimination. In this note we present some examples that show the impact of this change in hardware on the choice of algorithms for sparse Gaussian elimination. Common folk wisdom holds that general sparse Gaussian elimination algorithms do not perform well on vector computers. Our numerical results demonstrate that hardware gather/scatter allows general sparse elimination algorithms to outperform algorithms based on a band, envelope, or block structure on such computers.
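The gather/scatter pattern in question can be shown in a small sketch (illustrative names; not the paper's code): a sparse column update scatters the target column into a dense work array by index, applies an indexed update, and gathers the result back out. These indexed loads and stores are exactly what a hardware gather/scatter unit vectorizes.

```python
def sparse_column_update(target_idx, target_val, src_idx, src_val, mult, n):
    """Compute target - mult * src for two sparse columns of length n,
    using a dense work array for the scatter/gather."""
    work = [0.0] * n
    for i, v in zip(target_idx, target_val):  # scatter: indexed stores
        work[i] = v
    for i, v in zip(src_idx, src_val):        # indexed update ("sparse SAXPY")
        work[i] -= mult * v
    pattern = sorted(set(target_idx) | set(src_idx))
    return pattern, [work[i] for i in pattern]  # gather: indexed loads
```

Without hardware support, the indexed accesses defeat vectorization, which is the origin of the "folk wisdom" the paper rebuts.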

4 citations


Book ChapterDOI
17 Sep 1986
TL;DR: In analyzing electronic circuits, it is usually necessary to solve a system of simultaneous linear equations with a sparse coefficient matrix; to treat such problems effectively, a dedicated parallel machine called (sm)2-II (the "Sparse Matrix Solving Machine," version II) has been developed.
Abstract: In analyzing electronic circuits, it is usually necessary to solve a system of simultaneous linear equations with a sparse coefficient matrix. In order to treat these problems effectively, we have developed a dedicated parallel machine called (sm)2-II (the "Sparse Matrix Solving Machine," version II).

3 citations



Proceedings ArticleDOI
D. Shu, C. Li, Y. Sun
01 Apr 1986
TL;DR: This paper attempts to perform as much classification analysis as possible during the off-line learning process, while only a very small subset of the range data needs to be processed for the on-line recognition.
Abstract: The problem of speedily and reliably interpreting range data for industrial object recognition is becoming critically important in the field of robotics and computer vision. This paper presents one approach to attacking this problem. It attempts to perform as much classification analysis as possible during the off-line learning process, so that only a very small subset of the range data needs to be processed for on-line recognition. An adaptive matched filter is developed to extract an object's surface feature vectors using range measurements and surface normals. A sparse representation for each object category j is constructed for classification purposes in the form of an Rj-table of selected feature vectors. The approach, based on the concept of the generalized Hough transform, classifies an object by examining the maximum votes it receives from the various Rj-tables. An optimal selection rule is established for minimizing the misclassification probability. An experiment with a set of simulated range images of nine categories demonstrated the success of the proposed methodology.
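The voting step can be sketched as follows (a hedged toy version; the matching tolerance, the table contents, and the function name are illustrative assumptions, not the paper's adaptive matched filter): each category's table of stored feature vectors receives one vote per observed feature that matches an entry, and the category with the most votes wins.

```python
def classify_by_votes(features, r_tables, tol=0.1):
    """Classify by maximum votes over per-category feature tables.

    features: list of observed feature tuples.
    r_tables: {category: [stored feature tuples]}.
    Returns (winning category, vote counts).
    """
    def close(a, b):
        # componentwise match within tolerance (a stand-in for the
        # paper's feature-matching criterion)
        return all(abs(x - y) <= tol for x, y in zip(a, b))

    votes = {}
    for cat, table in r_tables.items():
        votes[cat] = sum(1 for f in features
                         if any(close(f, entry) for entry in table))
    return max(votes, key=votes.get), votes
```

Because only the sparse tables of selected features are consulted, the on-line cost is small even when the raw range data is large.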



Journal ArticleDOI
TL;DR: This paper gives a brief overview of what is intrinsically a variation of Gaussian elimination, but one that seems well suited for sparse systems, especially randomly sparse systems; it can be parallelized at least as well as the usual Gaussian elimination and readily vectorized as well.

Dissertation
01 Apr 1986
TL;DR: This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/15992.
Abstract: This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/15992


Proceedings ArticleDOI
08 Jun 1986
TL;DR: In this paper, several algorithms for solving the sparse matrix equation are compared: the Gaussian elimination algorithm, the Cholesky decomposition algorithm, and several versions of conjugate gradient methods; the effect of the nonzero element positions on the efficiency is also studied.
Abstract: In applying the method of moments to solve EM scattering problems, it is necessary to solve a large matrix equation when the dimension of the scatterer is larger than several wavelengths. A tremendous amount of computer CPU time will be spent on solving the matrix equation. When only far-field properties such as the scattering cross section are of interest, we can use the sparse matrix technique to reduce the amount of computation. In this paper, some algorithms for solving the sparse matrix equation are compared. The Gaussian elimination algorithm, the Cholesky decomposition algorithm, and several versions of conjugate gradient methods are used. The numbers of multiplications and divisions (flops) are counted to compare the efficiency of these algorithms. The effect of the nonzero element positions on the efficiency is also studied by defining a clustering index.
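Of the algorithm families compared above, conjugate gradients is the one that benefits most directly from sparsity, since it touches the matrix only through matrix-vector products. A minimal sketch (a textbook CG, not the paper's code; the dict-of-dicts storage is an illustrative assumption) for a symmetric positive definite system:

```python
def cg(A, b, n, iters=100, tol=1e-10):
    """Conjugate gradients for A x = b, A symmetric positive definite.

    A is stored sparsely as {row: {col: val}}; only nonzeros cost work
    in the matrix-vector product, which is CG's whole interaction with A.
    """
    def matvec(x):
        return [sum(v * x[j] for j, v in A.get(i, {}).items())
                for i in range(n)]

    x = [0.0] * n
    r = b[:]                     # residual b - A x (x starts at zero)
    p = r[:]                     # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Per-iteration cost is proportional to the number of nonzeros, which is why the clustering of nonzero positions studied in the paper matters for the direct methods far more than for CG.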

Book ChapterDOI
B. Hua, Z. Mao, X. Ye, Y. Zhang, G. Luo, H. Xiong 
01 Jan 1986
TL;DR: BEAP-II, a large boundary element analysis system developed by the authors, is presented; it can solve stress analysis problems of two- and three-dimensional continua and compute the stress intensity factor of cracks.
Abstract: In this paper BEAP-II, a large system for boundary element analysis developed by the authors, is presented. This system can solve stress analysis problems of two- and three-dimensional continua and compute the stress intensity factor of cracks. In the numerical solution, because the subregion technique and the unsymmetric sparse algorithm [5] are applied to the boundary element method, only one or two matrix sections are kept in internal storage. Therefore this program can solve large engineering problems. BEAP-II possesses unified pre- and post-processors, a solver, and an element library. Some applied examples in engineering can be found in the authors' other papers [6, 7].