Open Access Journal Article (DOI)

Recursive approach in sparse matrix LU factorization

TLDR
In this paper, a recursive method for the LU factorization of sparse matrices is described, and performance results show that the recursive approach may perform comparably to leading software packages for sparse matrix factorization in terms of execution time, memory usage, and error estimates of the solution.
Abstract
This paper describes a recursive method for the LU factorization of sparse matrices. The recursive formulation of common linear algebra codes has proven very successful in dense matrix computations. An extension of the recursive technique to sparse matrices is presented. Performance results given here show that the recursive approach may perform comparably to leading software packages for sparse matrix factorization in terms of execution time, memory usage, and error estimates of the solution.
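As a rough illustration of the recursive formulation the abstract refers to, the sketch below factors a dense matrix by splitting it into 2×2 blocks and recursing; this is a minimal no-pivoting teaching version, not the paper's sparse implementation (which must also handle fill-in and pivoting):

```python
def matmul(A, B):
    """Plain dense matrix product (lists of lists)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def forward_solve(L, B):
    """Solve L X = B for X, with L unit lower triangular."""
    n, k = len(L), len(B[0])
    X = [[0.0] * k for _ in range(n)]
    for j in range(k):
        for i in range(n):
            X[i][j] = B[i][j] - sum(L[i][p] * X[p][j] for p in range(i))
    return X

def backward_solve_right(U, B):
    """Solve X U = B for X, with U upper triangular."""
    n, r = len(U), len(B)
    X = [[0.0] * n for _ in range(r)]
    for i in range(r):
        for j in range(n):
            X[i][j] = (B[i][j] - sum(X[i][p] * U[p][j] for p in range(j))) / U[j][j]
    return X

def recursive_lu(A):
    """Recursive LU factorization without pivoting: A = L U.

    Split A into 2x2 blocks, factor A11 recursively, solve for the
    off-diagonal blocks, then recurse on the Schur complement."""
    n = len(A)
    if n == 1:
        return [[1.0]], [[A[0][0]]]
    m = n // 2
    A11 = [row[:m] for row in A[:m]]
    A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]
    A22 = [row[m:] for row in A[m:]]
    L11, U11 = recursive_lu(A11)
    U12 = forward_solve(L11, A12)          # L11 U12 = A12
    L21 = backward_solve_right(U11, A21)   # L21 U11 = A21
    S = matsub(A22, matmul(L21, U12))      # Schur complement of A11
    L22, U22 = recursive_lu(S)
    L = [r + [0.0] * (n - m) for r in L11] + [a + b for a, b in zip(L21, L22)]
    U = [a + b for a, b in zip(U11, U12)] + [[0.0] * m + r for r in U22]
    return L, U
```

The appeal of this formulation, noted in the dense-matrix literature, is that most of the work lands in the block operations (the triangular solves and the Schur-complement update), which map well onto cache-friendly Level 3 BLAS kernels.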



Citations
Journal ArticleDOI

The LINPACK Benchmark: past, present and future

TL;DR: In addition to the LINPACK Benchmark suite, the TOP500 list and the HPL code are presented, and information is given on how to interpret the results of the benchmark and how those results fit into the performance-evaluation process.

Automatic performance tuning of sparse matrix kernels

TL;DR: An automated system to generate highly efficient, platform-adapted implementations of sparse matrix kernels, which extends SPARSITY to support tuning for a variety of common non-zero patterns arising in practice, and for additional kernels like sparse triangular solve (SpTS) and computation of AᵀA·x and Aᵖ·x.
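For context, the kernel at the heart of such tuning systems is sparse matrix–vector multiply over the compressed sparse row (CSR) format; a minimal, untuned reference version (the tuned variants in the cited work specialize this loop per non-zero pattern and platform) looks like:

```python
def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a sparse matrix A stored in CSR form.

    Row i's nonzeros occupy positions indptr[i]:indptr[i+1];
    indices holds their column numbers, data their values."""
    y = []
    for i in range(len(indptr) - 1):
        y.append(sum(data[k] * x[indices[k]]
                     for k in range(indptr[i], indptr[i + 1])))
    return y
```

For example, the matrix [[1, 0, 2], [0, 3, 0], [4, 0, 5]] is stored as indptr=[0, 2, 3, 5], indices=[0, 2, 1, 0, 2], data=[1, 2, 3, 4, 5].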
Journal ArticleDOI

Recursive Blocked Algorithms and Hybrid Data Structures for Dense Matrix Library Software

TL;DR: Some of the recent advances made by applying the paradigm of recursion to dense matrix computations on today's memory-tiered computer systems are reviewed and detailed.
Proceedings ArticleDOI

Harnessing GPU tensor cores for fast FP16 arithmetic to speed up mixed-precision iterative refinement solvers

TL;DR: This paper presents an investigation showing that other high-performance computing (HPC) applications can also harness the power of low-precision floating-point arithmetic, and shows how using half-precision Tensor Cores (FP16-TC) for the arithmetic can provide up to a 4× speedup.
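The technique behind that result, mixed-precision iterative refinement, can be sketched as follows. Here a deliberately crude correction step (diagonal scaling, a hypothetical stand-in for the FP16 LU solve used in the paper) is repaired by residuals computed in full working precision:

```python
def iterative_refinement(A, b, approx_solve, iters=20):
    """Solve A x = b by repeatedly correcting with a cheap, inaccurate solver.

    The residual r = b - A x is computed in working (full) precision;
    approx_solve only needs to return a rough correction for r."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [xi + di for xi, di in zip(x, approx_solve(r))]
    return x

# Crude low-accuracy "solver": scale the residual by the diagonal of A.
# (In the paper's setting this role is played by an FP16 factorization.)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = iterative_refinement(A, b, lambda r: [r[0] / 4.0, r[1] / 3.0])
```

Convergence hinges on the cheap solver being accurate enough that each pass contracts the error; the economics work out when the low-precision solve is far faster than a full-precision one, as with Tensor-Core FP16 arithmetic.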
Journal ArticleDOI

Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

TL;DR: A new approach for computing the LU factorization in parallel on multicore architectures is proposed, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting.
References
Book

Iterative Methods for Sparse Linear Systems

Yousef Saad
TL;DR: This chapter discusses methods related to the normal equations of linear algebra, drawing on techniques derived in previous chapters of the book.
Journal ArticleDOI

Gaussian elimination is not optimal

TL;DR: In this paper, Strassen gave an algorithm which computes the coefficients of the product of two square matrices A and B of order n with fewer than 4.7·n^(log 7) arithmetical operations (all logarithms in the paper are to base 2).
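The core of Strassen's scheme multiplies 2×2 (block) matrices with 7 multiplications instead of 8; applied recursively to matrix blocks, this yields the n^(log₂ 7) ≈ n^2.81 operation count. A scalar sketch of the seven products and the recombination:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products.

    With scalars this saves nothing; the payoff comes when the entries
    are themselves large matrix blocks and the scheme is applied recursively."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```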
Journal ArticleDOI

A set of level 3 basic linear algebra subprograms

TL;DR: This paper describes an extension to the set of Basic Linear Algebra Subprograms targeted at matrix-matrix operations that should provide for efficient and portable implementations of algorithms for high-performance computers.
MonographDOI

Direct methods for sparse matrices

TL;DR: This book also aims to be suitable for a student course, probably at MSc level; the subject is intensely practical, and the book is written with practicalities ever in mind.