
Showing papers on "Sparse approximation published in 1987"


Book ChapterDOI
01 Jan 1987
TL;DR: In this chapter, two classes of methods for solving the large sparse matrix systems arising from tomographic problems are discussed, viz. ART-like methods and projection methods; the former class has been in use for several decades, while the latter is more recent.
Abstract: In this chapter we will digress on two classes of methods for solving the large sparse matrix systems arising from tomographic problems, viz. ART-like methods and projection methods. The former class has been in use for several decades, the latter is more recent.
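To make the ART idea concrete, the sketch below runs one ART-style sweep (Kaczmarz's method), projecting the iterate onto the hyperplane of each equation in turn. It is an illustrative dense toy, not the chapter's implementation: tomographic systems are large and sparse, and the relaxation parameter `relax` is a standard device assumed here rather than anything specific to this chapter.

```python
import numpy as np

def kaczmarz_sweep(A, b, x, relax=1.0):
    """One ART-style sweep: project x onto each hyperplane a_i . x = b_i in turn.

    A is dense here for clarity; in tomography its rows are sparse.
    relax in (0, 2) is the usual relaxation parameter; 1.0 is a pure projection.
    """
    for a_i, b_i in zip(A, b):
        norm2 = a_i @ a_i
        if norm2 > 0.0:
            x = x + relax * (b_i - a_i @ x) / norm2 * a_i
    return x

# Tiny consistent system: repeated sweeps converge to the solution [1, 1].
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = np.zeros(2)
for _ in range(100):
    x = kaczmarz_sweep(A, b, x)
print(x)  # approximately [1. 1.]
```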

236 citations


Journal ArticleDOI
01 Dec 1987
TL;DR: Progress in the use of direct methods for solving very large sparse symmetric positive definite systems of linear equations on vector supercomputers is summarized.
Abstract: This paper summarizes progress in the use of direct methods for solving very large sparse symmetric positive definite systems of linear equations on vector supercomputers. Sparse direct solvers based on the multifrontal method or the general sparse method now outperform band or envelope solvers on vector supercomputers such as the CRAY X-MP. This departure from conventional wisdom is due to several advances. The hardware gather/scatter feature or indirect address feature of some recent vector supercomputers permits vectorization of the general sparse factorization. Other advances are algorithmic. The new multiple minimum degree algorithm calculates a powerful ordering much faster than its predecessors. Exploiting the supernode structure of the factored matrix provides vectorization over nested loops, giving greater speed in the factorization module for the multifrontal and general sparse methods. Out-of-core versions of both methods are now available. Numerical results on the CRAY X-MP for several structural engineering problems demonstrate the impact of these improvements.
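The fill-reducing effect of a minimum-degree-style ordering is easy to reproduce with current tools. The sketch below, an illustration of the effect rather than any of the paper's solvers, uses SciPy's SuperLU wrapper; its MMD_AT_PLUS_A option is a multiple-minimum-degree ordering in the spirit of the algorithm the paper credits.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# 2-D Laplacian on a 40x40 grid: a typical sparse SPD test matrix.
n = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()

for order in ("NATURAL", "MMD_AT_PLUS_A"):  # no reordering vs. multiple minimum degree
    lu = splu(A, permc_spec=order)
    print(order, "nonzeros in L + U:", lu.L.nnz + lu.U.nnz)
# The minimum-degree ordering leaves far fewer nonzeros in the factors,
# which is the kind of saving the orderings discussed in the paper deliver.
```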

149 citations


Journal ArticleDOI
TL;DR: Several possible computational strategies for computing a sparse basis for the null space of a sparse underdetermined matrix are described, both combinatorial and noncombinatorial in nature.
Abstract: We present algorithms for computing a sparse basis for the null space of a sparse underdetermined matrix. We describe several possible computational strategies, both combinatorial and noncombinatorial in nature, and we compare their effectiveness for several test problems.
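For contrast, the sketch below computes a null space basis densely via the SVD using SciPy's null_space. Such a basis is orthonormal but generally dense, which is precisely why the paper's strategies for a sparse basis are of interest; this is a reference computation, not one of the paper's algorithms.

```python
import numpy as np
from scipy.linalg import null_space

# Underdetermined system: 2 equations, 4 unknowns -> a 2-dimensional null space.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])

Z = null_space(A)             # columns form an orthonormal basis of {x : Ax = 0}
print(Z.shape)                # (4, 2)
print(np.allclose(A @ Z, 0))  # True
# The SVD-based basis is dense; the paper's algorithms instead trade
# orthonormality for sparsity in the basis vectors.
```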

95 citations


01 Oct 1987
TL;DR: A quadratic programming method based on the classical Schur complement is described, whose key feature is that much of the linear algebraic work associated with an entire sequence of iterations involves a fixed sparse factorization.
Abstract: In applying active-set methods to sparse quadratic programs, it is desirable to utilize existing sparse-matrix techniques. The authors describe a quadratic programming method based on the classical Schur complement. Its key feature is that much of the linear algebraic work associated with an entire sequence of iterations involves a fixed sparse factorization. Updates are performed at every iteration to the factorization of a smaller matrix, which may be treated as dense or sparse. The use of a fixed sparse factorization allows an off-the-shelf sparse equation solver to be used repeatedly. This feature is ideally suited to problems with structure that can be exploited by a specialized factorization. Moreover, improvements in efficiency derived from exploiting new parallel and vector computer architectures are immediately applicable. An obvious application of the method is in sequential quadratic programming methods for nonlinearly constrained optimization, which require solution of a sequence of closely related quadratic programming subproblems. Some ways in which the known relationship between successive problems can be exploited are discussed.
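A minimal sketch of the linear algebra involved, under simplifying assumptions (a diagonal stand-in for the fixed sparse matrix K, and constraint columns supplied all at once rather than accumulated across iterations): K is factorized once and reused, while the per-iteration work is confined to a small dense Schur complement.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def bordered_solve(Klu, B, f, g):
    """Solve [[K, B], [B.T, 0]] [x; y] = [f; g] reusing a fixed factorization of K.

    Klu is a precomputed sparse LU of K; only the small dense Schur complement
    S = -B.T K^{-1} B depends on the columns B appended during the iterations.
    """
    Kinv_B = Klu.solve(B)                 # one sparse solve per appended column
    Kinv_f = Klu.solve(f)
    S = -B.T @ Kinv_B                     # small dense Schur complement
    y = np.linalg.solve(S, g - B.T @ Kinv_f)
    x = Kinv_f - Kinv_B @ y
    return x, y

K = sp.diags([4.0] * 6).tocsc()   # stand-in for the fixed sparse matrix
Klu = splu(K)                     # factorize once, reuse at every iteration
B = np.eye(6)[:, :2]              # two active constraint gradients (toy choice)
x, y = bordered_solve(Klu, B, np.ones(6), np.zeros(2))
print(np.allclose(B.T @ x, 0.0))  # True: the constraints hold
```

In an actual active-set iteration, a factorization of S would itself be updated as constraints enter and leave the working set rather than rebuilt, which is where the method's per-iteration savings come from.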

69 citations


Proceedings Article
01 Dec 1987

51 citations


Journal ArticleDOI
TL;DR: Primal-dual algorithms for the min-sum linear assignment problem are summarized, and a new algorithm is proposed for the complete case: it transforms the complete cost matrix into a sparse one, solves the sparse problem, and checks the optimality of the solution found with respect to the original problem.
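A rough sketch of the sparsification step, with details assumed for brevity: keep the k smallest costs in each row, forbid the rest with a large penalty, and solve the reduced problem with SciPy's linear_sum_assignment. Optimality is checked here by comparison against a full solve; the paper's test instead uses dual variables, so that the complete problem never has to be solved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
C = rng.integers(1, 100, size=(50, 50)).astype(float)

# Sparsify: keep only the k smallest costs per row, penalize everything else.
k, BIG = 8, 1e9
sparse_C = np.full_like(C, BIG)
idx = np.argsort(C, axis=1)[:, :k]
np.put_along_axis(sparse_C, idx, np.take_along_axis(C, idx, axis=1), axis=1)

rows, cols = linear_sum_assignment(sparse_C)  # solve the cheap sparse problem
assert (sparse_C[rows, cols] < BIG).all()     # sparsification left a feasible matching
full_rows, full_cols = linear_sum_assignment(C)
print(C[rows, cols].sum(), C[full_rows, full_cols].sum())
# If the two costs differ, enlarge k and re-solve; the paper detects this
# situation through the duals of the sparse problem instead.
```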

38 citations


Journal ArticleDOI
Ameet K. Dave, Iain S. Duff
01 Jul 1987
TL;DR: Kernels and codes were tested on a CRAY-2 prior to the delivery of a machine to Harwell in 1987, and some results are reported on the solution of sparse equations which indicate that high efficiency can be obtained.
Abstract: The CRAY-2 is sometimes viewed as a CRAY-1 with a faster cycle time. We show that this viewpoint is not completely appropriate by describing some of the important architectural differences between the machines and indicating how they can be used to facilitate numerical calculations. We have been testing kernels and codes on a CRAY-2 prior to the delivery of a machine to Harwell in 1987 and report some results on the solution of sparse equations which indicate that high efficiency can be obtained. We give details of one of the techniques for attaining this performance. We also comment on the use of parallelism in the solution of sparse linear equations on the CRAY-2.

32 citations


Book ChapterDOI
01 Jan 1987
TL;DR: Three main approaches used in the direct solution of sparse unsymmetric linear equations are discussed, along with how they perform on computers with vector or parallel architectures.
Abstract: We discuss three main approaches that are used in the direct solution of sparse unsymmetric linear equations and indicate how they perform on computers with vector or parallel architecture. The principal methods which we consider are general solution schemes, frontal methods, and multifrontal techniques. In each case, we illustrate the approach by reference to a package in the Harwell Subroutine Library. We consider the implementation of the various approaches on machines with vector architecture (like the CRAY-1) and on parallel architectures, both with shared memory and with local memory and message passing.

17 citations


DOI
01 Jan 1987
TL;DR: This work investigates and develops a fast, scalable implementation of the Lanczos algorithm over GF(p) that can solve large, sparse linear systems in parallel using workstation clusters and dedicated Beowulf clusters.
Abstract: The second stage of processing in the Number Field Sieve (NFS) and Function Field Sieve (FFS) algorithms, when applied to discrete logarithm problems, requires the solution of a large, sparse linear system defined over a finite field. This operation, although well studied in theory, presents a practical bottleneck that can limit the size of problem either algorithm can tackle. In order to partially bridge this gap between theory and practice, we investigate and develop a fast, scalable implementation of the Lanczos algorithm over GF(p) that can solve such systems in parallel using workstation clusters and dedicated Beowulf clusters.
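A minimal scalar sketch of the kernel in question, assuming a symmetric system: the Lanczos recurrence over GF(p) in the style of the LaMacchia-Odlyzko adaptation, with modular inverses in place of real division. The thesis concerns a parallel, large-scale implementation; this toy only shows the arithmetic, including the breakdown that occurs if a self-orthogonal vector turns up.

```python
def dot(u, v, p):
    return sum(a * b for a, b in zip(u, v)) % p

def matvec(rows, w, p):
    # rows[i] is a list of (column, value) pairs: a simple sparse row format
    return [sum(val * w[j] for j, val in row) % p for row in rows]

def lanczos_gfp(rows, b, p):
    """Solve A x = b over GF(p) for symmetric sparse A via scalar Lanczos."""
    x = [0] * len(b)
    w, w_prev, v_prev, d_prev = list(b), None, None, None  # w_0 = b
    while True:
        v = matvec(rows, w, p)                  # v_i = A w_i
        d = dot(w, v, p)                        # (w_i, A w_i)
        if d == 0:
            if any(w):
                raise ArithmeticError("self-orthogonal w_i: Lanczos breakdown")
            return x                            # w_i = 0, so x solves A x = b
        t = dot(w, b, p) * pow(d, -1, p) % p    # contribution of w_i to x
        x = [(xi + t * wi) % p for xi, wi in zip(x, w)]
        # Three-term recurrence: w_{i+1} = A w_i - c1 w_i - c2 w_{i-1}
        c1 = dot(v, v, p) * pow(d, -1, p) % p
        w_next = [(vi - c1 * wi) % p for vi, wi in zip(v, w)]
        if w_prev is not None:
            c2 = dot(v, v_prev, p) * pow(d_prev, -1, p) % p
            w_next = [(wn - c2 * wp) % p for wn, wp in zip(w_next, w_prev)]
        w_prev, v_prev, d_prev, w = w, v, d, w_next

# Symmetric 2x2 system over GF(101): A = [[2, 1], [1, 3]], b = [3, 4] -> x = [1, 1]
A = [[(0, 2), (1, 1)], [(0, 1), (1, 3)]]
print(lanczos_gfp(A, [3, 4], 101))  # [1, 1]
```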

Journal ArticleDOI
TL;DR: A Pascal program that implements the Gaussian elimination strategy for the solution of (very) sparse linear systems is presented.

Journal ArticleDOI
TL;DR: A Quasi-Newton type method is presented that applies to large and sparse nonlinear systems of equations and uses the Q-R factorization of the approximate Jacobians; it belongs to a more general class of algorithms for which a local convergence theorem is proved.
Abstract: In this paper we present a Quasi-Newton type method, which applies to large and sparse nonlinear systems of equations, and uses the Q-R factorization of the approximate Jacobians. This method belongs to a more general class of algorithms for which we prove a local convergence theorem. Some numerical experiments seem to confirm that the new algorithm is reliable.
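To show the shape of such an iteration, here is a dense Broyden sketch in which every quasi-Newton step is solved through a QR factorization. It refactorizes a dense matrix at each step purely for clarity; the paper's contribution is to maintain and update sparse Q-R factors instead, and the toy system and starting Jacobian below are arbitrary choices.

```python
import numpy as np

def broyden_qr(F, x, B, tol=1e-10, max_iter=50):
    """Quasi-Newton (Broyden) iteration for F(x) = 0, each step solved via QR."""
    for _ in range(max_iter):
        Q, R = np.linalg.qr(B)
        s = np.linalg.solve(R, -Q.T @ F(x))       # quasi-Newton step: B s = -F(x)
        y = F(x + s) - F(x)
        B = B + np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-one update
        x = x + s
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Toy system with root (1, 1); B0 is the exact Jacobian at the starting point.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
B0 = np.array([[3.0, 1.0], [1.0, -1.0]])
print(broyden_qr(F, np.array([1.5, 0.5]), B0))  # approximately [1. 1.]
```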

Journal ArticleDOI
TL;DR: The spatial and storage costs of two programs that solve (very) sparse linear systems are compared, and the former program is shown to be significantly cheaper than the latter.

Proceedings Article
01 Mar 1987
TL;DR: The idea of grouping the non-zero elements of a sparse matrix into a few almost-parallel strips is applied to the design of a systolic accelerator for sparse matrix operations, and this accelerator is integrated into a complete systolic system for the solution of large sparse linear systems of equations.
Abstract: The idea of grouping the non-zero elements of a sparse matrix into a few strips that are almost parallel is applied to the design of a systolic accelerator for sparse matrix operations. This accelerator is then integrated into a complete systolic system for the solution of large sparse linear systems of equations. The design demonstrates that the application of systolic arrays is not limited to regular computations, and that computationally irregular problems may be solved on systolic networks if local storage is provided in each systolic cell for buffering the irregularity in the data movement and for absorbing the irregularity in the computation.

Journal ArticleDOI
TL;DR: A new method is introduced for solving Nonlinear Least Squares problems when the Jacobian matrix of the system is large and sparse, using a preconditioned Conjugate Gradient algorithm and a two-dimensional trust region scheme.
Abstract: We introduce a new method for solving Nonlinear Least Squares problems when the Jacobian matrix of the system is large and sparse.
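As a sketch of one building block, the code below computes a Gauss-Newton step by running conjugate gradients on the normal equations through a LinearOperator, so that the matrix J^T J is never formed explicitly. The preconditioner and the two-dimensional trust region scheme that make up the paper's actual method are omitted here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(J, r):
    """Solve the normal equations J.T J s = -J.T r by CG, never forming J.T J."""
    n = J.shape[1]
    JtJ = LinearOperator((n, n), matvec=lambda v: J.T @ (J @ v), dtype=float)
    s, _ = cg(JtJ, -J.T @ r)  # the paper applies a preconditioner at this point
    return s

# Linear residual r(x) = J x - y with sparse J: one Gauss-Newton step solves it.
J = (sp.random(100, 20, density=0.1, random_state=0) + sp.eye(100, 20)).tocsr()
y = J @ np.ones(20)
s = gauss_newton_step(J, J @ np.zeros(20) - y)
print(np.abs(s - 1.0).max())  # tiny: the step lands on the least-squares solution
```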

Journal ArticleDOI
TL;DR: It is shown experimentally that an equivalent reordering, if appropriately chosen, can reduce the CPU time and elapsed time for sparse factorization.
Abstract: The impact of reordering on the Cholesky factorization of a sparse matrix in a paging environment is examined. We show experimentally that an equivalent reordering, if appropriately chosen, can reduce the CPU time and elapsed time for sparse factorization.

Proceedings Article
01 Jan 1987
TL;DR: With a new "sparse-align" vector unit, PSolve is vectorizable and runs faster than any other algorithm for all the selected test matrices on the Alliant FX/8 multiprocessor.
Abstract: PSolve solves a sparse system of linear equations on a shared-memory parallel processor. Each autonomous process reduces a pair of matrix rows, uses pairwise pivoting for numerical stability, and synchronizes with only a few others at a time. For most test matrices, on the Alliant FX/8 multiprocessor, PSolve is faster than Gaussian Elimination (does not exploit sparsity) and the Yale Sparse Matrix Package (does not exploit parallelism). Although Gaussian Elimination runs in vector mode, PSolve cannot use currently available vector hardware because of its compact matrix storage scheme. However, with a new "sparse-align" vector unit, PSolve is vectorizable and runs faster than any other algorithm for all the selected test matrices.
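A serial sketch of the pairwise-pivoting device, using a made-up dense augmented-row format for clarity: a column is eliminated by combining rows two at a time, always pivoting on the pair's larger entry. PSolve performs such reductions concurrently on sparse rows; nothing below is taken from PSolve itself.

```python
import numpy as np

def pairwise_eliminate(rows, col):
    """Zero out `col` in all but one row, combining rows two at a time and
    pivoting on the larger entry of each pair (the pairwise-pivoting device).
    `rows` holds augmented rows [a_i | b_i]; returns (pivot_row, reduced_rows)."""
    active = [r for r in rows if r[col] != 0.0]
    done = [r for r in rows if r[col] == 0.0]
    while len(active) > 1:
        r1, r2 = active.pop(), active.pop()
        if abs(r2[col]) > abs(r1[col]):      # pivot on the larger entry
            r1, r2 = r2, r1
        active.append(r1)
        done.append(r2 - (r2[col] / r1[col]) * r1)  # eliminate col from the other row
    return (active[0] if active else None), done

A_b = [np.array(r) for r in
       [[2.0, 1.0, 3.0], [4.0, 1.0, 5.0], [1.0, 3.0, 4.0]]]  # [A | b] with x = [1, 1]
pivot, reduced = pairwise_eliminate(A_b, 0)
print(pivot, [r[0] for r in reduced])  # surviving pivot row; zeros in column 0 elsewhere
```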

Journal ArticleDOI
TL;DR: The utility of a pattern analysis procedure called sparse decomposition, which involves sequentially "peeling" sparse subsets of patterns from a pattern set, is defined, and a statistic P is derived and shown to be powerful in detecting clustering tendency for data in reasonably compact sampling windows.
Abstract: We define and verify the utility of a pattern analysis procedure called sparse decomposition. This technique involves sequentially "peeling" sparse subsets of patterns from a pattern set, where sparse subsets are sets of patterns which possess a certain degree of regularity or compactness as measured by a compactness measure c. If this is repeated until all patterns are deleted, then the sequence of decomposition "layers" derived by this procedure provides a wealth of information from which inferences about the original pattern set may be made. A statistic P is derived from this information and is shown to be powerful in detecting clustering tendency for data in reasonably compact sampling windows. The test is applied to both synthetic and real data.

Book ChapterDOI
01 Jan 1987
TL;DR: The paper aims to show some of the techniques used to analyze “sparse images”, and results of their application on simulated and real data are given.
Abstract: Images where the “on” pixels are spatially spread are named “sparse images”. Their analysis usually cannot be performed in a standard fashion and requires new methodologies. The paper aims to show some of the techniques used to analyze “sparse images”. Results of their application on simulated and real data are also given.

Journal ArticleDOI
TL;DR: The implementation of the Extended to Limit sparse LU factorization solution methods is given and two FORTRAN subroutines for the approximate (or exact) factorization and the solution of resulting equations are supplied.
Abstract: The implementation of the Extended to Limit sparse LU factorization solution methods is given. Two FORTRAN subroutines for the approximate (or exact) factorization and the solution of resulting equations are supplied. The amount of fill-in terms can be controlled by the user through parameters R1, R2, the limiting case being when the matrix is factorized exactly.
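The fill-control parameters R1, R2 play a role broadly analogous to the drop tolerance of modern incomplete LU factorizations; that analogy is an editorial assumption, not the paper's formulation. As an illustration of the trade-off (not of the paper's FORTRAN routines), SciPy's spilu exposes it directly:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

# 2-D grid operator: a typical sparse test matrix.
n = 30
T = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()

# A looser drop tolerance discards more fill-in, giving a cheaper, rougher
# factorization; drop_tol = 0 recovers the exact LU (the paper's limiting case).
for drop_tol in (1e-1, 1e-3, 0.0):
    ilu = spilu(A, drop_tol=drop_tol, fill_factor=100)
    print(drop_tol, "nonzeros in L + U:", ilu.L.nnz + ilu.U.nnz)
```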



Journal ArticleDOI
TL;DR: An ordering algorithm is presented to optimize the number of operations and memory requirements of matrix and vector sparse methods by taking into account the length of the paths used in sparse vector methods.