Open Access · Journal Article · DOI

Overview of total least-squares methods

TLDR
It is explained how special structure of the weight matrix and the data matrix can be exploited for efficient cost function and first derivative computation, which leads to computationally efficient solution methods.
About
This article was published in Signal Processing on 2007-10-01 and is currently open access. It has received 745 citations to date. The article focuses on the topics: Low-rank approximation & Singular value decomposition.


Citations
Journal Article · DOI

Spatially Sparse Precoding in Millimeter Wave MIMO Systems

TL;DR: This paper considers transmit precoding and receiver combining in mmWave systems with large antenna arrays and develops algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware.
Journal Article · DOI

The Supernova Legacy Survey 3-year sample: Type Ia supernovae photometric distances and cosmological constraints

TL;DR: In this paper, photometric properties and distance measurements of 252 high redshift Type Ia supernovae (0.15 < z < 1.1) were presented and their multi-colour light curves measured using the MegaPrime/MegaCam instrument at the Canada-France-Hawaii Telescope (CFHT).
Journal Article · DOI

Solving inverse problems using data-driven models

TL;DR: This survey paper aims to give an account of some of the main contributions in data-driven inverse problems.

Some modified matrix eigenvalue problems.

Gene H. Golub
TL;DR: This work considers the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used, and studies several eigenvalue problems which arise in least squares problems.
Journal Article · DOI

Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

TL;DR: Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays, as well as formulating and solving (regularized) TLS optimization problems under sparsity constraints.
References
Book

Matrix computations

Gene H. Golub
Journal Article · DOI

LIII. On lines and planes of closest fit to systems of points in space

TL;DR: This paper is concerned with the construction of lines and planes of closest fit to systems of points in space.
Journal Article · DOI

The approximation of one matrix by another of lower rank

TL;DR: In this paper, the problem of approximating one matrix by another of lower rank is formulated as a least-squares problem, and the normal equations cannot be immediately written down, since the elements of the approximate matrix are not independent of one another.
Book

Measurement Error Models

TL;DR: In this paper, the authors provide a complete treatment of an important and frequently ignored topic, namely measurement error models, including regression models with errors in the variables, latent variable models, and factor models.
Journal Article · DOI

An Analysis of the Total Least Squares Problem

TL;DR: In this article, a singular value decomposition analysis of the TLS problem is presented, which provides a measure of the underlying problem's sensitivity and its relationship to ordinary least squares regression.
Frequently Asked Questions (16)
Q1. What have the authors contributed in "Overview of total least squares methods"?

The authors review the development and extensions of the classical total least squares method and describe algorithms for its generalization to weighted and structured approximation problems. They explain how special structure of the weight matrix and the data matrix can be exploited for efficient cost function and first derivative computation. They also describe applications to deconvolution, linear prediction, and errors-in-variables system identification.

The input/output representation is a linear system of equations AX = B, which is the classical way of addressing approximation problems. 

Total least squares is applied in computer vision [58], image reconstruction [65, 54, 22], speech and audio processing [39, 29], modal and spectral analysis [89, 93], linear system theory [14, 13], system identification [66, 37, 63, 52], and astronomy [8]. 

The special case in which the weight matrix W is diagonal is called element-wise weighted total least squares.

The least squares approximation X̂_ls is obtained as a solution of the optimization problem

\[
\{\hat{X}_{\mathrm{ls}},\, \Delta B_{\mathrm{ls}}\} := \arg\min_{X,\, \Delta B} \|\Delta B\|_{\mathrm{F}} \quad \text{subject to} \quad A X = B + \Delta B. \tag{LS}
\]

The rationale behind this approximation method is to correct the right-hand side B as little as possible in the Frobenius norm sense, so that the corrected system of equations A X = B̂, B̂ := B + ΔB, has an exact solution.
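As a minimal sketch of (LS), assuming NumPy and hypothetical example data, the correction ΔB_ls follows directly from the ordinary least squares solution:

```python
import numpy as np

# Hypothetical overdetermined data: m equations, n unknowns, d right-hand sides
rng = np.random.default_rng(0)
m, n, d = 10, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, d))

# Ordinary least squares solution X_ls of the overdetermined system A X ~ B
X_ls, *_ = np.linalg.lstsq(A, B, rcond=None)

# Correction that makes the system exact: A X_ls = B + Delta_B_ls,
# so the (LS) misfit is the Frobenius norm of Delta_B_ls
Delta_B_ls = A @ X_ls - B
print(np.linalg.norm(Delta_B_ls, "fro"))
```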

The least squares approximation is statistically motivated as a maximum likelihood estimator in a linear regression model under standard assumptions (zero mean, normally distributed residual with a covariance matrix that is a multiple of the identity). 

In addition, various types of bounded uncertainties have been proposed in order to improve robustness of the estimators under various noise conditions [18, 11]. 

More general problem formulations, such as restricted total least squares [88], which also allow the incorporation of equality constraints, have been proposed, as well as total least squares problem formulations using ℓp norms in the cost function. 

The motivation for considering the weighted total least squares problem (WTLS) is that it defines the maximum likelihood estimator for the errors-in-variables model when the measurement noise C̃ = [Ã B̃] is zero mean and normally distributed with covariance matrix

\[
\operatorname{cov}\bigl(\operatorname{vec}(\tilde{C}^{\top})\bigr) = \sigma^{2} W^{-1}, \tag{**}
\]

i.e., the weight matrix W is, up to the scaling factor σ², the inverse of the measurement noise covariance matrix.
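As a sketch of (**) under the additional assumption that the noise in different rows of C̃ is independent, the weight matrix is block diagonal and its blocks are (up to the factor σ²) the inverses of the per-row noise covariances:

```python
import numpy as np

# Hypothetical per-row noise covariances V_i of the rows of C~ = [A~ B~]
# (rows assumed independent); n + d = 3 variables per row, m = 4 rows
rng = np.random.default_rng(1)
m, nd = 4, 3
V = [np.eye(nd) + np.diag(0.1 * rng.random(nd)) for _ in range(m)]

# cov(vec(C~^T)) is then block diagonal with blocks V_i, so the weight
# matrix W (proportional to its inverse) is block diagonal with blocks V_i^{-1}
W_blocks = [np.linalg.inv(Vi) for Vi in V]
W = np.zeros((m * nd, m * nd))
for i, Wi in enumerate(W_blocks):
    W[i * nd:(i + 1) * nd, i * nd:(i + 1) * nd] = Wi
```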

The mixed least squares-total least squares problem formulation makes it possible to extend consistency of the total least squares estimator to errors-in-variables models in which some of the variables are measured without error.

In fact, generically, any splitting of the variables into a group of d variables (outputs) and a group of remaining variables (inputs) defines a valid input/output partitioning.

The minimal number of independent linear equations necessary to define a linear static model B is d, i.e., in a minimal representation B = ker(R) with rowdim(R) = d. 
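A small sketch (with hypothetical numbers) of how such a minimal kernel representation relates to the input/output representation AX = B, assuming the parametrization R = [X⊤ −I_d]:

```python
import numpy as np

# Hypothetical input/output parameter X: n = 3 inputs, d = 2 outputs
rng = np.random.default_rng(2)
n, d = 3, 2
X = rng.standard_normal((n, d))

# Kernel representation of the same linear static model:
# B = ker(R) with R = [X^T  -I_d], so that rowdim(R) = d
R = np.hstack([X.T, -np.eye(d)])

# A data point c = [a; b] belongs to the model exactly when X^T a = b,
# which is the same condition as R c = 0
a = rng.standard_normal(n)
b = X.T @ a
c = np.concatenate([a, b])
print(np.allclose(R @ c, 0))  # True
```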

Robustness of the total least squares solution is also improved by adding regularization, resulting in regularized total least squares methods [20, 26, 74, 73, 7]. 

The Riemannian singular value decomposition framework of De Moor [12] is derived for the structured total least squares problem but includes the weighted total least squares problem with a diagonal weight matrix and d = 1 as a special case.

The best weighted total least squares approximation of C in B is

\[
\hat{c}_{\mathrm{wtls},i} = P \left(P^{\top} W_i P\right)^{-1} P^{\top} W_i c_i, \qquad i = 1, \ldots, m,
\]

with the corresponding misfit

\[
M_{\mathrm{wtls}}\bigl(C, \operatorname{colspan}(P)\bigr) = \sqrt{\sum_{i=1}^{m} c_i^{\top} W_i \left(I - P \left(P^{\top} W_i P\right)^{-1} P^{\top} W_i\right) c_i}. \tag{M_wtls,P}
\]

The remaining problem, the minimization with respect to the model parameters, is a nonconvex optimization problem that in general has no closed-form solution.
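A sketch of this closed-form misfit evaluation, assuming NumPy (the function name and argument layout are illustrative, not from the paper):

```python
import numpy as np

def wtls_misfit(C, W_list, P):
    """Weighted total least squares misfit of the data C with respect to
    the model colspan(P), using the closed-form per-row corrections.

    C      -- m x (n+d) data matrix, rows c_i are the data points
    W_list -- m positive definite (n+d) x (n+d) weight matrices W_i
    P      -- (n+d) x n basis matrix of the model B = colspan(P)
    """
    misfit_sq = 0.0
    C_hat = np.empty_like(C, dtype=float)
    for i, (c_i, W_i) in enumerate(zip(C, W_list)):
        # Weighted projection of c_i onto colspan(P) in the W_i metric
        PWP = P.T @ W_i @ P
        c_hat = P @ np.linalg.solve(PWP, P.T @ W_i @ c_i)
        C_hat[i] = c_hat
        # c_i^T W_i (I - P (P^T W_i P)^{-1} P^T W_i) c_i = c_i^T W_i (c_i - c_hat)
        misfit_sq += c_i @ W_i @ (c_i - c_hat)
    return np.sqrt(misfit_sq), C_hat
```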

The basic and generalized total least squares problems have an analytic solution in terms of the singular value decomposition of the data matrix, which allows fast and reliable computation of the solution.
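As an illustrative sketch of that analytic solution for the basic total least squares problem (assuming NumPy, an overdetermined system with m ≥ n + d, and the generic case in which the solution exists and is unique):

```python
import numpy as np

def tls(A, B):
    """Basic total least squares solution of A X ~ B via the SVD of [A B].

    A is m x n, B is m x d, with m >= n + d. Returns X_tls such that
    (A + dA) X_tls = B + dB for the smallest correction [dA dB] in the
    Frobenius norm (generic case: the solution exists and is unique).
    """
    n, d = A.shape[1], B.shape[1]
    # Singular value decomposition of the augmented data matrix C = [A B]
    C = np.hstack([A, B])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    # The last d right singular vectors span the space associated with
    # the d smallest singular values of C
    V12 = V[:n, n:]   # n x d block
    V22 = V[n:, n:]   # d x d block
    # X_tls = -V12 V22^{-1}
    return np.linalg.solve(V22.T, -V12.T).T
```

In contrast to the (LS) correction above, here both A and B are perturbed to make the corrected system solvable.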