This work defines and explores the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa, and shows how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix.
Abstract:
A real, square matrix $Q$ is J-orthogonal if $Q^TJQ = J$, where the signature matrix $J = \diag(\pm 1)$. J-orthogonal matrices arise in the analysis and numerical solution of various matrix problems involving indefinite inner products, including, in particular, the downdating of Cholesky factorizations. We present techniques and tools useful in the analysis, application, and construction of these matrices, giving a self-contained treatment that provides new insights. First, we define and explore the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa. Then we show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. We employ the decomposition to derive an algorithm for constructing random J-orthogonal matrices with specified norm and condition number. We also give a short proof of the fact that J-orthogonal matrices are optimally scaled under two-sided diagonal scalings. We introduce the indefinite polar decomposition and investigate two iterations for computing the J-orthogonal polar factor: a Newton iteration involving only matrix inversion and a Schulz iteration involving only matrix multiplication. We show how these iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal.
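For concreteness, here is a minimal NumPy sketch (illustrative, not from the paper) that checks the defining relation $Q^TJQ = J$ for the simplest nontrivial J-orthogonal matrix, a 2x2 hyperbolic rotation:

```python
import numpy as np

# Signature matrix J = diag(1, -1) and a 2x2 hyperbolic rotation Q,
# the simplest nontrivial example of a J-orthogonal matrix.
t = 1.5
J = np.diag([1.0, -1.0])
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])

# Defining relation Q^T J Q = J (up to rounding error).
print(np.allclose(Q.T @ J @ Q, J))        # True

# Unlike an orthogonal matrix, Q can have a large 2-norm: here ||Q||_2 = e^t.
print(np.linalg.norm(Q, 2), np.exp(t))
```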
TL;DR: The focus is on the analogues of singular value and CS (cosine-sine) decompositions for general H-unitary and Lorentz matrices, and on the analogues of the Jordan form, in a suitable basis with certain orthonormality properties, for diagonalizable H-unitary and Lorentz matrices.
TL;DR: In this article, the tensor similarity transformation is introduced and the T-Jordan canonical form and its properties are presented. But the tensors are not invertible via the T-product, and results on the T-group inverse and T-Drazin inverse are not given.
TL;DR: Three methods for reducing a symmetric indefinite matrix pair, with B nonsingular, to tridiagonal-diagonal form by congruence transformations are described and an optimality condition for the transformations used in the third reduction is proved.
TL;DR: In this paper, two ways of creating isospectral systems, by QR factorisation with a shift and by using the concept of isospectral flow, are illustrated using FEM models.
TL;DR: In this article, the authors present results of both classic and recent matrix analyses using canonical forms as a unifying theme, and demonstrate their importance in a variety of applications, such as linear algebra and matrix theory.
TL;DR: In this article, the perturbation of eigenvalues and generalized eigenvalue problems are studied. But the authors focus on linear systems and least squares problems and do not consider invariant subspaces.
TL;DR: Higham gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic, combining algorithmic derivations, perturbation theory, and rounding error analysis.
TL;DR: This volume treats the numerical solution of dense and large-scale eigenvalue problems with an emphasis on algorithms and the theoretical background required to understand them.
TL;DR: Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.
Q1. What are the contributions in "J-Orthogonal Matrices: Properties and Generation" (Higham, Nicholas J., 2003)?
The authors present techniques and tools useful in the analysis, application and construction of these matrices, giving a self-contained treatment that provides new insights. Then the authors show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. The authors introduce the indefinite polar decomposition and investigate two iterations for computing the J-orthogonal polar factor: a Newton iteration involving only matrix inversion and a Schulz iteration involving only matrix multiplication. The authors show that these iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal.
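The Newton iteration is only described abstractly here; below is a rough sketch of how such an iteration might look, assuming the form $X_{k+1} = \frac{1}{2}(X_k + J X_k^{-T} J)$ by analogy with the classical Newton iteration for the orthogonal polar factor (the exact form, starting conditions, and convergence analysis are those given in the paper):

```python
import numpy as np

def newton_j_polar(A, J, tol=1e-12, max_iter=50):
    """Sketch of a Newton-type iteration for the J-orthogonal polar factor.
    Assumed form: X_{k+1} = (X_k + J X_k^{-T} J) / 2, started from X_0 = A.
    Fixed points satisfy X^T J X = J, i.e. they are J-orthogonal."""
    X = A.copy()
    for _ in range(max_iter):
        X_new = 0.5 * (X + J @ np.linalg.inv(X).T @ J)
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

# Re-J-orthogonalize a slightly perturbed hyperbolic rotation.
J = np.diag([1.0, -1.0])
t = 0.7
Q = np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])
X = newton_j_polar(Q + 1e-6 * np.random.randn(2, 2), J)
print(np.linalg.norm(X.T @ J @ X - J))    # small: X is numerically J-orthogonal
```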
Q2. What is the Newton iteration used for?
Restoring lost orthogonality is a common requirement, for example in the numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
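For plain orthogonality, the iteration in question is the well-known Newton iteration for the orthogonal polar factor, $X_{k+1} = \frac{1}{2}(X_k + X_k^{-T})$; a minimal illustrative sketch (not the paper's code):

```python
import numpy as np

def reorthogonalize(A, n_steps=6):
    """Newton iteration X_{k+1} = (X_k + X_k^{-T}) / 2, which converges to the
    orthogonal polar factor of A and so restores lost orthogonality."""
    X = A.copy()
    for _ in range(n_steps):
        X = 0.5 * (X + np.linalg.inv(X).T)
    return X

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q + 1e-4 * rng.standard_normal((5, 5))   # a slightly "drifted" orthogonal matrix
X = reorthogonalize(A)
print(np.linalg.norm(A.T @ A - np.eye(5)))   # noticeably nonorthogonal
print(np.linalg.norm(X.T @ X - np.eye(5)))   # near machine precision
```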
Q3. What is the simplest way to prove the convergence of $X_{k+1}$?
From standard analysis of this iteration (see, e.g., [23]), the authors know that $S_k$ converges quadratically to $\mathrm{sign}(S_0)$, which is the identity matrix since the spectrum of $S_0$ lies in the open right half-plane.
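The iteration referred to here is the standard Newton iteration for the matrix sign function, $S_{k+1} = \frac{1}{2}(S_k + S_k^{-1})$; a small illustrative sketch (the test matrix is an assumption, chosen so that its spectrum lies in the open right half-plane):

```python
import numpy as np

rng = np.random.default_rng(1)
# Upper triangular with positive diagonal, so the spectrum {1, 2, 0.5, 3}
# lies in the open right half-plane and sign(S_0) = I.
S = np.triu(rng.standard_normal((4, 4)), k=1) + np.diag([1.0, 2.0, 0.5, 3.0])

for k in range(8):
    S = 0.5 * (S + np.linalg.inv(S))          # Newton step for the matrix sign
    print(k, np.linalg.norm(S - np.eye(4)))   # error roughly squares each step
```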
Q4. How large can the norm of a J-orthogonal matrix be?
Unlike for orthogonal matrices, for general J-orthogonal matrices $\|Q\|_2$ can be arbitrarily large, and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
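An illustrative numerical experiment (not from the paper): even just forming a hyperbolic rotation in double precision gives a J-orthogonality residual that grows with $\|Q\|_2$, which is the kind of accuracy limitation referred to here:

```python
import numpy as np

J = np.diag([1.0, -1.0])
for t in [1.0, 5.0, 10.0, 15.0]:
    Q = np.array([[np.cosh(t), np.sinh(t)],
                  [np.sinh(t), np.cosh(t)]])      # J-orthogonal in exact arithmetic
    res = np.linalg.norm(Q.T @ J @ Q - J)
    print(f"t = {t:4.1f}   ||Q||_2 = {np.linalg.norm(Q, 2):9.2e}   residual = {res:8.1e}")
```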
Q5. How can an iteration that avoids explicit matrix inversion be obtained?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
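For reference, the classical Schulz iteration for the matrix inverse takes the form $X_{k+1} = X_k(2I - AX_k)$ and uses only matrix multiplication; a small self-contained sketch (illustrative; the J-orthogonal variant in the paper differs):

```python
import numpy as np

def schulz_inverse(A, n_steps=15):
    """Schulz iteration for the matrix inverse: X_{k+1} = X_k (2I - A X_k).
    The starting guess X_0 = A^T / (||A||_1 ||A||_inf) is a standard choice
    that guarantees convergence (quadratic once the residual is small)."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(n_steps):
        X = X @ (2.0 * np.eye(n) - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.linalg.norm(schulz_inverse(A) @ A - np.eye(2)))  # near machine precision
```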