Journal ArticleDOI

Iterative Refinement in Floating Point

Cleve B. Moler
- 01 Apr 1967
- Journal of the ACM, Vol. 14, Iss. 2, pp. 316-321
TLDR
Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations; if sufficiently high precision is used for the residual computation, the final result is shown to be very accurate.
Abstract
Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.
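To make the procedure concrete, here is a minimal sketch of the refinement loop described in the abstract, assuming NumPy/SciPy; float32 stands in for the working precision and float64 for the higher precision used only in the residual step. These are illustrative choices, not the paper's original setting.

```python
# Minimal sketch of iterative refinement (illustrative, not the paper's code).
# float32 plays the role of the working precision; float64 is the higher
# precision used only for the residual r = b - A x.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def iterative_refinement(A, b, steps=5):
    lu, piv = lu_factor(A.astype(np.float32))             # factor once, working precision
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(steps):
        r = b - A @ x                                      # residual in higher precision
        d = lu_solve((lu, piv), r.astype(np.float32))      # correction from the same factors
        x = x + d
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
x_true = rng.standard_normal(200)
b = A @ x_true
print(np.linalg.norm(iterative_refinement(A, b) - x_true) / np.linalg.norm(x_true))
```

Only the residual is evaluated in the higher precision; the factorization is computed once in working precision and reused for every correction step.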


Citations
Book

Proximal Algorithms

TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
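For context (standard notation, not a quotation from the monograph), the proximal operator of a function f with parameter λ > 0 is the following, shown here as a LaTeX display assuming amsmath:

```latex
% Proximal operator of f with parameter \lambda > 0 (standard definition)
\operatorname{prox}_{\lambda f}(v)
  = \operatorname*{arg\,min}_{x}
    \Bigl( f(x) + \tfrac{1}{2\lambda}\,\lVert x - v \rVert_2^2 \Bigr)
```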
Journal ArticleDOI

Computational methods of linear algebra

TL;DR: A survey of computational methods in linear algebra can be found in this article, where the authors discuss the means and methods of estimating the quality of numerical solutions to computational problems, the generalized inverse of a matrix, the solution of systems with rectangular and poorly conditioned matrices, and more traditional questions such as algebraic eigenvalue problems and systems with a square matrix.
Journal ArticleDOI

Accelerating Scientific Computations with Mixed Precision Algorithms

TL;DR: The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor.
Proceedings ArticleDOI

Harnessing GPU tensor cores for fast FP16 arithmetic to speed up mixed-precision iterative refinement solvers

TL;DR: This paper presents an investigation showing that other high-performance computing (HPC) applications can also harness the power of fast half-precision floating-point arithmetic, and shows how using half-precision Tensor Cores (FP16-TC) for the arithmetic can provide up to a 4× speedup.
Journal ArticleDOI

Mixed Precision Iterative Refinement Techniques for the Solution of Dense Linear Systems

TL;DR: By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
References
Journal ArticleDOI

Solution of real and complex systems of linear equations

TL;DR: In this paper, it is shown that a non-singular matrix can be factorized in the form A = LU, where L is lower-triangular and U is upper-triangular, and that the factorization, when it exists, is unique to within a non-singular diagonal multiplying factor.
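As a quick illustration of the LU idea, here is a sketch using SciPy's row-pivoted variant, so a permutation P appears that is not part of the plain A = LU statement in the cited paper:

```python
# Sketch: factor a matrix with SciPy's (pivoted) LU and verify A = P L U.
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [2.0, 5.0, 7.0]])
P, L, U = lu(A)                    # L unit lower-triangular, U upper-triangular
print(np.allclose(P @ L @ U, A))   # True: the factors reconstruct A
```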
Book ChapterDOI

Iterative Refinement of the Solution of a Positive Definite System of Equations

TL;DR: If A is ill-conditioned, the computed solution may not be sufficiently accurate, but (provided A is not almost singular to working accuracy) it may be improved by an iterative procedure in which the Cholesky decomposition is used repeatedly.
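A minimal sketch of that procedure, assuming SciPy and using float32/float64 as stand-ins for working and higher precision (illustrative choices, not the chapter's setting):

```python
# Sketch: refine the solution of a positive definite system by reusing one
# Cholesky factorization; float32 = working precision, float64 = residuals.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def refine_spd(A, b, steps=5):
    factors = cho_factor(A.astype(np.float32))             # Cholesky, computed once
    x = cho_solve(factors, b.astype(np.float32)).astype(np.float64)
    for _ in range(steps):
        r = b - A @ x                                       # residual in float64
        x = x + cho_solve(factors, r.astype(np.float32))    # corrected solution
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100.0 * np.eye(100)                           # symmetric positive definite
b = rng.standard_normal(100)
print(np.linalg.norm(A @ refine_spd(A, b) - b))
```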