Abstract:
A real, square matrix $Q$ is J-orthogonal if $Q^T J Q = J$, where $J = \diag(\pm 1)$ is a signature matrix. J-orthogonal matrices arise in the analysis and numerical solution of various matrix problems involving indefinite inner products, including, in particular, the downdating of Cholesky factorizations. We present techniques and tools useful in the analysis, application, and construction of these matrices, giving a self-contained treatment that provides new insights. First, we define and explore the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa. Then we show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. We employ the decomposition to derive an algorithm for constructing random J-orthogonal matrices with specified norm and condition number. We also give a short proof of the fact that J-orthogonal matrices are optimally scaled und...
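As a concrete illustration of the defining condition $Q^T J Q = J$, the sketch below builds a $2\times 2$ hyperbolic rotation, a standard example of a J-orthogonal matrix, and checks the condition numerically. The matrix and the hyperbolic angle are illustrative choices, not taken from the paper.

```python
import numpy as np

# Check the J-orthogonality condition Q^T J Q = J for a 2x2
# hyperbolic rotation (a standard example of a J-orthogonal matrix).
J = np.diag([1.0, -1.0])           # signature matrix diag(+-1)
t = 0.7                            # arbitrary hyperbolic angle
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])

residual = np.linalg.norm(Q.T @ J @ Q - J)
print(residual < 1e-12)            # True: Q is J-orthogonal
```

The identity $\cosh^2 t - \sinh^2 t = 1$ is what makes the condition hold exactly, in analogy with $\cos^2\theta + \sin^2\theta = 1$ for plane rotations.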
TL;DR: The indefinite variant of Lanczos' method for solving eigenvalue problems is scrutinized, and it is shown that for J-Hermitian matrices this method works much better than Arnoldi's method.
TL;DR: In this article, the explicit expression of the projected condition number of the equality-constrained indefinite least squares problem is given, and some new compact forms and upper bounds of projected condition numbers are derived to improve computational efficiency.
TL;DR: In this article, the authors complete Dyson's dream by cementing the links between symmetric spaces and classical random matrix ensembles through the use of alternative coordinate systems, such as generalized Cartan decompositions.
TL;DR: The paper addresses decentralized processing of measurement data based on J-orthogonal matrix transformations in the square-root information Kalman filter; numerical experiments confirm the efficiency of the proposed solution.
TL;DR: In this article, a systematic approach inspired by the generalized Cartan decomposition of Lie theory is proposed to catalog 53 matrix factorizations, most of which are believed to be new.
TL;DR: In this article, the authors present results of both classic and recent matrix analyses using canonical forms as a unifying theme, and demonstrate their importance in a variety of applications, such as linear algebra and matrix theory.
TL;DR: In this article, the perturbation of eigenvalues and generalized eigenvalue problems is studied; the authors focus on linear systems and least squares problems and do not consider invariant subspaces.
TL;DR: Higham as discussed by the authors gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic, combining algorithmic derivations, perturbation theory, and rounding error analysis.
TL;DR: This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them.
TL;DR: Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.
Q1. What are the contributions of "J-Orthogonal Matrices: Properties and Generation" (Higham, Nicholas J., 2003)?
The authors present techniques and tools useful in the analysis, application and construction of these matrices, giving a self-contained treatment that provides new insights. Then the authors show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. The authors introduce the indefinite polar decomposition and investigate two iterations for computing the J-orthogonal polar factor: a Newton iteration involving only matrix inversion and a Schulz iteration involving only matrix multiplication. The authors show that these iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal.
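The answer above mentions a Newton iteration involving only matrix inversion. A minimal sketch follows, assuming the iteration takes the J-analogue form $X_{k+1} = \tfrac12(X_k + J X_k^{-T} J)$ of the familiar Newton iteration for the orthogonal polar factor; the test matrix, perturbation size, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of a Newton iteration for the J-orthogonal polar
# factor, assumed to be X_{k+1} = (X_k + J X_k^{-T} J) / 2.
# Starting point: a matrix "not too far" from J-orthogonal.
J = np.diag([1.0, -1.0])
t = 0.3
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])    # exactly J-orthogonal
rng = np.random.default_rng(0)
X = Q + 1e-3 * rng.standard_normal((2, 2))  # slightly perturbed

for _ in range(10):                         # inversion-only iteration
    X = 0.5 * (X + J @ np.linalg.inv(X).T @ J)

print(np.linalg.norm(X.T @ J @ X - J) < 1e-10)  # True once converged
```

Note that $Q$ itself is a fixed point: $Q^T J Q = J$ implies $J Q^{-T} J = Q$, so the iteration leaves an exactly J-orthogonal matrix unchanged.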
Q2. What is the Newton iteration of a matrix?
Restoring lost orthogonality is a common requirement, for example in the numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
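One standard way to restore lost orthogonality, sketched here as an illustration rather than as the paper's method, is to replace a drifted matrix $X$ by the orthogonal polar factor $UV^T$ from its SVD $X = U\Sigma V^T$, which is the nearest orthogonal matrix to $X$. The perturbed matrix below is an illustrative example.

```python
import numpy as np

# Restore orthogonality by projecting onto the orthogonal polar
# factor U V^T, computed via the SVD.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # exactly orthogonal
X = Q + 1e-6 * rng.standard_normal((3, 3))        # orthogonality drifted

U, _, Vt = np.linalg.svd(X)
Q_restored = U @ Vt                               # nearest orthogonal matrix

print(np.linalg.norm(Q_restored.T @ Q_restored - np.eye(3)) < 1e-12)
```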
Q3. What is the simplest way to prove the convergence of $X_{k+1}$?
From the standard analysis of this iteration (see, e.g., [23]) the authors know that $S_k$ converges quadratically to $\operatorname{sign}(S_0)$, which is the identity matrix since the spectrum of $S_0$ lies in the open right half-plane.
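The sign iteration referred to here is the Newton iteration $S_{k+1} = \tfrac12(S_k + S_k^{-1})$. The sketch below applies it to a matrix whose spectrum lies in the open right half-plane, so that $\operatorname{sign}(S_0) = I$; the example matrix is an illustrative choice.

```python
import numpy as np

# Newton iteration for the matrix sign function:
#     S_{k+1} = (S_k + S_k^{-1}) / 2.
S = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # eigenvalues 2 and 3, both positive
for _ in range(10):
    S = 0.5 * (S + np.linalg.inv(S))

# sign(S0) = I since every eigenvalue has positive real part
print(np.allclose(S, np.eye(2)))  # True
```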
Q4. What is the simplest way to show that the inverse of the Newton iteration is?
Unlike for orthogonal matrices, for general J-orthogonal matrices $\|Q\|_2$ can be arbitrarily large, and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
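The unbounded norm is easy to see for hyperbolic rotations: for $Q(t) = \begin{psmallmatrix}\cosh t & \sinh t\\ \sinh t & \cosh t\end{psmallmatrix}$ the singular values are $e^t$ and $e^{-t}$, so $\|Q\|_2 = e^t$ grows without bound. The sketch below checks this numerically for a few illustrative values of $t$.

```python
import numpy as np

# ||Q(t)||_2 = e^t for the hyperbolic rotation Q(t): Q is symmetric
# with eigenvalues cosh t +- sinh t = e^t, e^{-t}, both positive,
# so these are also its singular values.
for t in (1.0, 5.0, 10.0):
    Q = np.array([[np.cosh(t), np.sinh(t)],
                  [np.sinh(t), np.cosh(t)]])
    print(np.isclose(np.linalg.norm(Q, 2), np.exp(t)))  # True each time
```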
Q5. What is the way to get the inverse of the matrix?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
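For the matrix inverse, the Schulz iteration mentioned above is $X_{k+1} = X_k(2I - AX_k)$, which uses only matrix multiplication. The sketch below uses the standard starting guess $X_0 = A^T/(\|A\|_1\|A\|_\infty)$, which guarantees convergence; the test matrix and iteration count are illustrative choices.

```python
import numpy as np

# Schulz iteration for the matrix inverse:
#     X_{k+1} = X_k (2I - A X_k),
# multiplication-only, quadratically convergent once ||I - A X_0|| < 1.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe start
I = np.eye(2)
for _ in range(30):
    X = X @ (2 * I - A @ X)

print(np.allclose(X @ A, I))  # True: X has converged to A^{-1}
```

The residual obeys $I - AX_{k+1} = (I - AX_k)^2$, which is the quadratic convergence referred to in the surrounding discussion.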