Q2. What is the simplest way to address approximation problems?
The input/output representation is a linear system of equations AX = B, which is the classical way of addressing approximation problems.
Q3. What is the common application of total least squares?
Total least squares is applied in computer vision [58], image reconstruction [65, 54, 22], speech and audio processing [39, 29], modal and spectral analysis [89, 93], linear system theory [14, 13], system identification [66, 37, 63, 52], and astronomy [8].
Q4. What is the special case of the weighted total least squares problem?
The special case when the weight matrix W is diagonal is called element-wise weighted total least squares (EW-TLS).
Q5. What is the rationale behind the total least squares method?
The least squares approximation X̂ls is obtained as a solution of the optimization problem

  { X̂ls, ∆Bls } := arg min over X, ∆B of ‖∆B‖F subject to AX = B + ∆B.  (LS)

The rationale behind this approximation method is to correct the right-hand side B as little as possible in the Frobenius norm sense, so that the corrected system of equations AX = B̂, with B̂ := B + ∆B, has an exact solution.
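The (LS) problem above can be sketched numerically. This is a minimal illustration with hypothetical random data: the least squares solution corrects only the right-hand side B, and the correction ∆B = AX̂ls − B makes the system exactly solvable.

```python
import numpy as np

# Hypothetical overdetermined system AX = B (sizes chosen for illustration)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))   # 6 equations, 2 unknowns
B = rng.standard_normal((6, 1))

# Least squares solution of min ||Delta_B||_F s.t. A X = B + Delta_B
X_ls, *_ = np.linalg.lstsq(A, B, rcond=None)

# The minimal correction of the right-hand side
Delta_B = A @ X_ls - B

# The corrected system A X = B + Delta_B holds exactly
print(np.linalg.norm(Delta_B, "fro"))
```

The residual ∆B is orthogonal to the column span of A, which is the familiar normal-equations characterization of the least squares solution.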
Q6. What is the definition of the least squares approximation method?
The least squares approximation is statistically motivated as a maximum likelihood estimator in a linear regression model under standard assumptions (zero mean, normally distributed residual with a covariance matrix that is a multiple of the identity).
Q7. What type of uncertainty has been proposed to improve the reliability of the total least squares estimator?
In addition, various types of bounded uncertainties have been proposed in order to improve robustness of the estimators under various noise conditions [18, 11].
Q8. What generalized total least squares problem formulations have been proposed?
More general problem formulations, such as restricted total least squares [88], which also allow the incorporation of equality constraints, have been proposed, as well as total least squares problem formulations using ℓp norms in the cost function.
Q9. What is the motivation for considering the weighted total least squares problem?
The motivation for considering the weighted total least squares problem is that it defines the maximum likelihood estimator for the errors-in-variables model when the measurement noise C̃ = [ Ã B̃ ] is zero mean and normally distributed with covariance matrix

  cov( vec(C̃⊤) ) = σ²W⁻¹,  (∗∗)

i.e., the weight matrix W is, up to the scaling factor σ², the inverse of the measurement noise covariance matrix.
Q10. What is the way to extend the consistency of the total least squares estimator?
The mixed least squares-total least squares problem formulation makes it possible to extend the consistency of the total least squares estimator to errors-in-variables models in which some of the variables are measured without error.
Q11. What is the definition of a minimal input/output partitioning?
In fact, generically, any splitting of the variables into a group of d variables (outputs) and a group of remaining variables (inputs), defines a valid input/output partitioning.
Q12. What is the minimum number of independent linear equations necessary to define a linear static model?
The minimal number of independent linear equations necessary to define a linear static model B is d, i.e., in a minimal representation B = ker(R) with rowdim(R) = d.
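A minimal kernel representation can be sketched as follows. The matrix R below is illustrative: it has d = 2 independent rows in R⁴, and a basis for the model B = ker(R) is obtained from the last n − d right singular vectors of R.

```python
import numpy as np

# Illustrative minimal representation: d = 2 independent equations in R^4
R = np.array([[1., 0., -1.,  0.],
              [0., 1.,  0., -2.]])
d, n = R.shape                    # rowdim(R) = d = 2, n = 4

# Basis for B = ker(R): last n - d right singular vectors of R
_, _, Vt = np.linalg.svd(R)
P = Vt[d:].T                      # columns of P span the model B

print(P.shape)                    # dim(B) = n - d
```

Every column of P is annihilated by R, so colspan(P) = ker(R) is the linear static model defined by the d equations.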
Q13. What is the way to improve the total least squares estimator?
Robustness of the total least squares solution is also improved by adding regularization, resulting in regularized total least squares methods [20, 26, 74, 73, 7].
Q14. What is the Riemannian singular value decomposition framework for the weighted total?
The Riemannian singular value decomposition framework of De Moor [12] is derived for the structured total least squares problem, but it includes the weighted total least squares problem with diagonal weight matrix and d = 1 as a special case.
Q15. What is the weighted total least squares approximation of C in B?
The best weighted total least squares approximation of C in B is

  ĉwtls,i = P(P⊤WiP)⁻¹P⊤Wi ci,  for i = 1, …, m,

with the corresponding misfit

  Mwtls( C, colspan(P) ) = √( ∑ᵢ₌₁ᵐ ci⊤ Wi ( I − P(P⊤WiP)⁻¹P⊤Wi ) ci ).  (MwtlsP)

The remaining problem, the minimization with respect to the model parameters, is a nonconvex optimization problem that in general has no closed-form solution.
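For a fixed model basis P, the approximation and misfit above are directly computable. The sketch below uses hypothetical random data C, weights Wi, and basis P; the operator P(P⊤WiP)⁻¹P⊤Wi is the Wi-oblique projector onto colspan(P).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 4, 1, 5                       # illustrative sizes
P = rng.standard_normal((n, n - d))     # basis of a candidate model colspan(P)
C = rng.standard_normal((m, n))         # data points c_1, ..., c_m as rows
# one positive definite weight matrix W_i per data point (hypothetical weights)
Ws = [np.eye(n) + 0.1 * np.outer(v, v) for v in rng.standard_normal((m, n))]

misfit_sq = 0.0
for ci, Wi in zip(C, Ws):
    Pi = P @ np.linalg.solve(P.T @ Wi @ P, P.T @ Wi)  # W_i-oblique projector
    c_hat = Pi @ ci                                   # best approx. of c_i in B
    misfit_sq += ci @ Wi @ (np.eye(n) - Pi) @ ci

M_wtls = np.sqrt(misfit_sq)
print(M_wtls)
```

The residual ci − ĉi of each data point is Wi-orthogonal to colspan(P), which is what makes ĉi the best approximation in the weighted norm.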
Q16. What is the difference between the basic and generalized total least squares problems?
The basic and generalized total least squares problems have an analytic solution in terms of the singular value decomposition of the data matrix, which allows fast and reliable computation of the solution.
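The analytic solution of the basic total least squares problem can be sketched from the singular value decomposition of the compound data matrix C = [A B]. The data below is hypothetical; the closed form X̂tls = −V12 V22⁻¹ uses the right singular vectors corresponding to the d smallest singular values.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, d = 20, 3, 1                       # illustrative sizes, d = 1 output
X_true = rng.standard_normal((n, d))
A = rng.standard_normal((m, n))
B = A @ X_true + 0.01 * rng.standard_normal((m, d))   # noisy right-hand side

# SVD of the compound data matrix C = [A B]
C = np.hstack([A, B])
_, _, Vt = np.linalg.svd(C)
V = Vt.T
V12 = V[:n, n:]                          # top-right block of V (n x d)
V22 = V[n:, n:]                          # bottom-right block of V (d x d)

X_tls = -V12 @ np.linalg.inv(V22)        # closed-form total least squares solution
print(np.linalg.norm(X_tls - X_true))
```

The computation relies on the numerically reliable SVD rather than forming normal equations, which is the reason the basic and generalized problems admit fast and stable solvers.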