Book

Iterative Solution Methods

05 Aug 2012
TL;DR: This book develops the theory of matrix eigenvalues and condition numbers for preconditioned matrices, drawing on the Perron-Frobenius theory for nonnegative matrices, and analyzes basic iterative methods, preconditionings, and conjugate gradient-type methods together with their rates of convergence.
Abstract: Preface. Acknowledgements.
1. Direct solution methods
2. Theory of matrix eigenvalues
3. Positive definite matrices, Schur complements, and generalized eigenvalue problems
4. Reducible and irreducible matrices and the Perron-Frobenius theory for nonnegative matrices
5. Basic iterative methods and their rates of convergence
6. M-matrices, convergent splittings, and the SOR method
7. Incomplete factorization preconditioning methods
8. Approximate matrix inverses and corresponding preconditioning methods
9. Block diagonal and Schur complement preconditionings
10. Estimates of eigenvalues and condition numbers for preconditioned matrices
11. Conjugate gradient and Lanczos-type methods
12. Generalized conjugate gradient methods
13. The rate of convergence of the conjugate gradient method
Appendices.
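As a quick illustration of the basic iterative methods of chapter 5, here is a minimal Jacobi iteration in Python; the test matrix, tolerance, and iteration cap are illustrative assumptions, not taken from the book.

```python
# A minimal sketch of a basic stationary iterative method: Jacobi
# iteration for Ax = b. It converges e.g. when A is strictly
# diagonally dominant (cf. the M-matrix theory of chapter 6).
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    D = np.diag(A)                   # diagonal part of A
    R = A - np.diagflat(D)           # off-diagonal remainder L + U
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # x_{k+1} = D^{-1} (b - (L + U) x_k)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example: a strictly diagonally dominant system (illustrative data).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b))
```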
Citations
Book
01 Apr 2003
TL;DR: This book covers iterative methods for sparse linear systems, from basic iterative and projection methods through Krylov subspace methods and methods based on the normal equations to preconditioning, multigrid, and domain decomposition techniques.
Abstract: Preface.
1. Background in linear algebra
2. Discretization of partial differential equations
3. Sparse matrices
4. Basic iterative methods
5. Projection methods
6. Krylov subspace methods, Part I
7. Krylov subspace methods, Part II
8. Methods related to the normal equations
9. Preconditioned iterations
10. Preconditioning techniques
11. Parallel implementations
12. Parallel preconditioners
13. Multigrid methods
14. Domain decomposition methods
Bibliography. Index.
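To make the preconditioning chapters concrete, the following is a hedged sketch of conjugate gradients with a simple Jacobi (diagonal) preconditioner; the function name, tolerance, and iteration cap are illustrative choices, not the book's.

```python
# A minimal sketch of a preconditioned Krylov method: conjugate
# gradients for a symmetric positive definite A with M = diag(A).
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    M_inv = 1.0 / np.diag(A)        # Jacobi preconditioner M^{-1}
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                   # residual
    z = M_inv * r                   # preconditioned residual
    p = z.copy()                    # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # beta = (r, z)_new / (r, z)_old
        rz = rz_new
    return x
```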

13,484 citations

Book
23 Dec 2007
TL;DR: Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis, and will be of interest to applied mathematicians, engineers, and computer scientists.
Abstract: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.
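A minimal sketch of the book's central idea, steepest descent generalized to a manifold: minimizing the Rayleigh quotient over the unit sphere yields an eigenvector for the smallest eigenvalue, one of the eigenspace problems the abstract mentions. The fixed step size and stopping rule below are illustrative assumptions, not the book's prescriptions.

```python
# Riemannian steepest descent on the unit sphere for f(x) = x^T A x.
# The Riemannian gradient is the tangent-space projection of the
# Euclidean gradient; normalization acts as a simple retraction.
import numpy as np

def sphere_steepest_descent(A, step=0.1, tol=1e-8, max_iter=5000):
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)                 # start on the sphere
    for _ in range(max_iter):
        Ax = A @ x
        rq = x @ Ax                        # Rayleigh quotient
        grad = 2.0 * (Ax - rq * x)         # project 2Ax onto the tangent space
        if np.linalg.norm(grad) < tol:
            break
        x = x - step * grad                # descend in the tangent space,
        x /= np.linalg.norm(x)             # then retract onto the sphere
    return rq, x
```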

2,586 citations

Book
28 Jun 2004
TL;DR: This article provides a tutorial on random matrices, offering an overview of the theory and bringing together in one source the most significant results recently obtained.
Abstract: Random matrix theory has found many applications in physics, statistics and engineering since its inception. Although early developments were motivated by practical experimental problems, random matrices are now used in fields as diverse as the Riemann hypothesis, stochastic differential equations, condensed matter physics, statistical physics, chaotic systems, numerical linear algebra, neural networks, multivariate statistics, information theory, signal processing and small-world networks. This article provides a tutorial on random matrices which provides an overview of the theory and brings together in one source the most significant results recently obtained. Furthermore, the application of random matrix theory to the fundamental limits of wireless communication channels is described in depth.
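As a hedged illustration of one cornerstone result such a tutorial covers, the sketch below empirically checks Wigner's semicircle law: the eigenvalues of a large, suitably scaled symmetric Gaussian matrix concentrate in [-2, 2]. The matrix size and normalization are illustrative choices.

```python
# Empirical check of Wigner's semicircle law for a symmetric
# Gaussian (GOE-style) random matrix scaled so entries have
# variance ~ 1/n; the limiting spectrum has support [-2, 2].
import numpy as np

rng = np.random.default_rng(0)
n = 2000
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2.0 * n)      # symmetric, off-diagonal variance 1/n
eigs = np.linalg.eigvalsh(W)

# The semicircle density is rho(x) = sqrt(4 - x^2) / (2 * pi);
# for large n the extreme eigenvalues approach -2 and +2.
print(eigs.min(), eigs.max())
```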

2,308 citations

Journal Article
TL;DR: A large selection of solution methods for linear systems in saddle point form is presented, with an emphasis on iterative methods for large and sparse problems.
Abstract: Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
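For concreteness, here is a minimal sketch of one classical saddle point solver discussed in surveys of this kind, the Uzawa iteration; the relaxation parameter, tolerance, and iteration cap are illustrative assumptions, and the paper covers far more robust methods.

```python
# Uzawa iteration for the saddle point system
#   [A  B^T] [x]   [f]
#   [B   0 ] [y] = [g]
# with A symmetric positive definite and B of full row rank.
import numpy as np

def uzawa(A, B, f, g, omega=1.0, tol=1e-10, max_iter=5000):
    x = np.zeros(A.shape[0])
    y = np.zeros(B.shape[0])
    for _ in range(max_iter):
        x = np.linalg.solve(A, f - B.T @ y)   # inner solve with the SPD block
        r = B @ x - g                         # residual of the constraint B x = g
        if np.linalg.norm(r) < tol:
            break
        y = y + omega * r                     # gradient ascent on the multiplier
    return x, y
```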

2,253 citations

Journal Article
TL;DR: This paper introduces two-step IST (TwIST) algorithms, which exhibit a much faster convergence rate than IST for ill-conditioned problems, and a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to MTwIST, the effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
Abstract: Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). The convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting a much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
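To clarify the baseline that TwIST accelerates, below is a hedged sketch of plain IST for the ℓ1-regularized least squares problem; the regularization weight and iteration count are illustrative, and TwIST itself replaces this one-step recursion with a two-step one.

```python
# Plain IST (iterative shrinkage/thresholding) for
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
# Each iteration takes a gradient step on the quadratic term and
# applies the soft-thresholding (shrinkage) operator.
import numpy as np

def soft(v, t):
    """Soft thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, b, lam=0.1, max_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fit term
        x = soft(x - grad / L, lam / L)    # gradient step + shrinkage
    return x
```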

1,870 citations