Journal Article

Perturbation bounds in connection with singular value decomposition

Per-Åke Wedin
01 Mar 1972 - Vol. 12, Iss. 1, pp. 99-111
TL;DR
Perturbation bounds for the singular subspaces of a slightly perturbed matrix are derived; they contain the sin ϑ theorem for Hermitian linear operators of Davis and Kahan as a special case, are applicable to the computational solution of overdetermined systems of linear equations, and especially cover the rank-deficient case when the matrix is replaced by one of lower rank.
Abstract
Let A be an m × n matrix which is slightly perturbed. In this paper we will derive an estimate of how much the invariant subspaces of A^H A and A A^H will then be affected. These bounds have the sin ϑ theorem for Hermitian linear operators in Davis and Kahan [1] as a special case. They are applicable to the computational solution of overdetermined systems of linear equations and especially cover the rank-deficient case when the matrix is replaced by one of lower rank.
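Not from the paper itself, but a minimal numerical sketch (Python/NumPy) of the kind of bound the abstract describes: the leading k-dimensional singular subspaces of A and of a perturbed A + E are compared through the sines of their principal angles and checked against a Wedin-type bound sqrt(2)·||E||_F / δ, with δ = σ_k(A + E) − σ_{k+1}(A). The matrix sizes, rank k, and perturbation scale are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 20, 3

# Build A with a clear gap after the k-th singular value (assumed test problem).
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.concatenate([np.array([10.0, 9.0, 8.0]), 0.1 * rng.random(n - k)])
A = U @ np.diag(s) @ V.T

E = 1e-3 * rng.standard_normal((m, n))     # small perturbation
At = A + E

# Leading k-dimensional singular subspaces of A and A + E.
Ua, sa, Vat = np.linalg.svd(A)
Ut, st, Vtt = np.linalg.svd(At)
U1, V1 = Ua[:, :k], Vat[:k, :].T
U1t, V1t = Ut[:, :k], Vtt[:k, :].T

# Frobenius norms of the sines of the principal angles: ||(I - P_U1) U1t||_F, etc.
sinU = np.linalg.norm(U1t - U1 @ (U1.T @ U1t))
sinV = np.linalg.norm(V1t - V1 @ (V1.T @ V1t))
lhs = np.hypot(sinU, sinV)

# Wedin-type bound sqrt(2) * ||E||_F / delta, with the singular value gap
# delta = sigma_k(A + E) - sigma_{k+1}(A), assumed positive.
delta = st[k - 1] - sa[k]
rhs = np.sqrt(2) * np.linalg.norm(E) / delta
print(f"sqrt(sin^2_U + sin^2_V) = {lhs:.3e}  <=  bound = {rhs:.3e}")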


Citations
Book

High-Dimensional Probability: An Introduction with Applications in Data Science

TL;DR: A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.
Journal Article

The differentiation of pseudoinverses and nonlinear least squares problems whose variables separate.

TL;DR: Algorithms are presented which make extensive use of well-known reliable linear least squares techniques, and numerical results and comparisons are given.

The differentiation of pseudo-inverses and non-linear least squares problems whose variables separate.

G. H. Golub, +1 more
TL;DR: In this paper, the least squares fit of nonlinear models to data {(t_i, y_i), i = 1, …, m} with basis functions φ_j(α; t_i), via the modified functional r_2(α) = ||y − Φ(α)Φ(α)^+ y||_2^2, is considered.
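Not the authors' code, but a short sketch of the variable projection idea this paper develops: for a separable model the linear coefficients are eliminated through the pseudoinverse, and only the nonlinear parameters α are optimized via the modified functional r_2(α) = ||y − Φ(α)Φ(α)^+ y||_2^2. The two-exponential model, the synthetic data, and the starting point are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.4 * t) + 0.01 * rng.standard_normal(t.size)

def Phi(alpha):
    # Columns phi_j(alpha; t): the model is y ~ a1*exp(-alpha1*t) + a2*exp(-alpha2*t).
    return np.column_stack([np.exp(-alpha[0] * t), np.exp(-alpha[1] * t)])

def r2(alpha):
    # Modified (variable projection) functional: || y - Phi(alpha) Phi(alpha)^+ y ||_2^2.
    P = Phi(alpha)
    return np.sum((y - P @ np.linalg.pinv(P) @ y) ** 2)

res = minimize(r2, x0=np.array([1.0, 0.3]), method="Nelder-Mead")
alpha_hat = res.x
a_hat = np.linalg.pinv(Phi(alpha_hat)) @ y     # linear coefficients recovered last
print("alpha =", alpha_hat, " a =", a_hat)
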
Posted Content

Tensor decompositions for learning latent variable models

TL;DR: A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices; this implies a robust and computationally tractable estimation approach for several popular latent variable models.
Journal Article

Tensor decompositions for learning latent variable models

TL;DR: In this article, the authors consider a wide class of latent variable models, including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation, which exploit a certain tensor structure in their low-order observable moments (typically, of second and third order).
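A hedged sketch (not the authors' reference implementation) of the tensor power iteration this analysis concerns: for an approximately orthogonally decomposable symmetric 3-tensor T = Σ_i λ_i v_i ⊗ v_i ⊗ v_i, the map θ ← T(I, θ, θ)/||T(I, θ, θ)|| is iterated from random restarts and the recovered component is deflated. The dimension, weights, noise level, and iteration counts are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 3
V, _ = np.linalg.qr(rng.standard_normal((d, k)))        # true components (orthonormal)
lam = np.array([3.0, 2.0, 1.0])
T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)           # exact moment tensor
T += 1e-4 * rng.standard_normal((d, d, d))               # small perturbation

def power_iteration(T, n_restarts=10, n_iter=50):
    # Run the iteration from several random starts and keep the best candidate.
    best = None
    for _ in range(n_restarts):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        for _ in range(n_iter):
            theta = np.einsum('abc,b,c->a', T, theta, theta)
            theta /= np.linalg.norm(theta)
        val = np.einsum('abc,a,b,c->', T, theta, theta, theta)   # estimated weight
        if best is None or val > best[0]:
            best = (val, theta)
    return best

estimates = []
for _ in range(k):
    lam_hat, v_hat = power_iteration(T)
    estimates.append((lam_hat, v_hat))
    T = T - lam_hat * np.einsum('a,b,c->abc', v_hat, v_hat, v_hat)   # deflation

for lam_hat, v_hat in estimates:
    print(f"lambda ~ {lam_hat:.3f}, max |<v_hat, v_i>| = {np.max(np.abs(V.T @ v_hat)):.4f}")
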
References
Book

Perturbation theory for linear operators

Tosio Kato
TL;DR: The monograph by T. Kato, as discussed by the authors, is an excellent reference on the theory of linear operators in Banach and Hilbert spaces, thoroughly worthwhile both for graduate students in functional analysis and for researchers in perturbation, spectral, and scattering theory.
Journal Article

The Rotation of Eigenvectors by a Perturbation. III

TL;DR: In this article, the difference between an invariant subspace of an operator and the corresponding subspace of its perturbation is characterized in terms of certain angles through which one subspace must be rotated in order most directly to reach the other, and sharp bounds on trigonometric functions of these angles are obtained from the gap and from bounds on either the perturbation or a computable residual.
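A hedged numerical illustration (not code from the paper) of the sin ϑ bound for Hermitian matrices: the rotation of the top-k eigenspace under a symmetric perturbation E is compared with ||E||_2 / δ, where δ is the gap between λ_k(A) and λ_{k+1}(A + E). The matrix size, k, and perturbation level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.concatenate([[5.0, 4.0], rng.random(n - k)])) @ Q.T   # clear gap after k = 2
E = 1e-2 * rng.standard_normal((n, n)); E = (E + E.T) / 2                # symmetric perturbation

wA, UA = np.linalg.eigh(A)      # eigh returns eigenvalues in ascending order
wB, UB = np.linalg.eigh(A + E)
U1  = UA[:, -k:]                # top-k eigenspace of A
U1t = UB[:, -k:]                # top-k eigenspace of A + E

# Largest sine of the principal angles between the two subspaces.
sin_theta = np.linalg.norm(U1t - U1 @ (U1.T @ U1t), 2)

# Davis-Kahan-type gap: lambda_k(A) - lambda_{k+1}(A + E), assumed positive.
delta = wA[-k] - wB[-k - 1]
bound = np.linalg.norm(E, 2) / delta
print(f"||sin Theta||_2 = {sin_theta:.3e}  <=  ||E||_2 / delta = {bound:.3e}")
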
Journal Article

Perturbation bounds for means of eigenvalues and invariant subspaces

TL;DR: In this article, the authors derived bounds for computed bases of subspaces of eigenvectors and principal vectors, relating them to the spaces spanned by the last singular vectors of corresponding powers of the matrix.
Journal Article

Ill-conditioned systems of linear algebraic equations

TL;DR: In this article, a method called the method of false perturbations is proposed for the solution of ill-conditioned systems. It gives the solution x of equation (1) as an orthogonal sum x^(1) + x^(2), where the term x^(1) is extremely sensitive both to inherent (ineradicable, in the terminology of [1]) errors and to rounding errors, while the second term has no such sensitivity.
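A hedged sketch of the orthogonal splitting described above, carried out here with a truncated SVD rather than the paper's own method of false perturbations: the solution of an ill-conditioned system is decomposed as x = x^(1) + x^(2), where x^(2) lies along well-conditioned singular directions and x^(1) along nearly singular directions that amplify data and rounding errors. The test matrix and the truncation threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
n = 10
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -10, n)                       # singular values spanning 10 orders of magnitude
A = U @ np.diag(s) @ V.T
b = A @ np.ones(n) + 1e-8 * rng.standard_normal(n)   # right-hand side with a tiny inherent error

c = U.T @ b
tol = 1e-6                                       # split the spectrum at this relative threshold
well = s > tol * s[0]
x2 = V[:, well] @ (c[well] / s[well])            # stable part of the solution
x1 = V[:, ~well] @ (c[~well] / s[~well])         # sensitive part: dominated by amplified noise

print("||x2|| =", np.linalg.norm(x2), " ||x1|| =", np.linalg.norm(x1))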