This work defines and explores the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa, and shows how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix.
Abstract:
A real, square matrix Q is J-orthogonal if $Q^TJQ = J$, where the signature matrix $J = \mathrm{diag}(\pm 1)$. J-orthogonal matrices arise in the analysis and numerical solution of various matrix problems involving indefinite inner products, including, in particular, the downdating of Cholesky factorizations. We present techniques and tools useful in the analysis, application, and construction of these matrices, giving a self-contained treatment that provides new insights. First, we define and explore the properties of the exchange operator, which maps J-orthogonal matrices to orthogonal matrices and vice versa. Then we show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. We employ the decomposition to derive an algorithm for constructing random J-orthogonal matrices with specified norm and condition number. We also give a short proof of the fact that J-orthogonal matrices are optimally scaled under two-sided diagonal scalings. We introduce the indefinite polar decomposition and investigate two iterations for computing the J-orthogonal polar factor: a Newton iteration involving only matrix inversion and a Schulz iteration involving only matrix multiplication. We show that these iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal.
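As a concrete illustration (not taken from the paper), the simplest nontrivial J-orthogonal matrix is a $2 \times 2$ hyperbolic rotation; the NumPy sketch below verifies the defining property $Q^TJQ = J$ for $J = \mathrm{diag}(1, -1)$.

```python
import numpy as np

# Illustration (not from the paper): a 2x2 hyperbolic rotation is
# J-orthogonal with respect to the signature matrix J = diag(1, -1).
t = 1.5
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])
J = np.diag([1.0, -1.0])

# Verify the defining property Q^T J Q = J (up to rounding error).
print(np.allclose(Q.T @ J @ Q, J))  # True
```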
TL;DR: A key feature of this analysis is the identification of two particular classes of scalar products, termed unitary and orthosymmetric, which serve to unify assumptions for the existence of structured factorizations.
TL;DR: In this paper, a generalized tensor function based on the tensor singular value decomposition (T-SVD) is defined, from which the projection operators and Moore-Penrose inverse of tensors are obtained.
TL;DR: It is shown that group structure is preserved precisely when $f(A^{-1}) = f(A)^{-1}$ for bilinear forms and when $f(A^{-*}) = f(A)^{-*}$ for sesquilinear forms, where $\mathbb{G}$ is the matrix automorphism group associated with a bilinear or sesquilinear form, and meromorphic functions that satisfy each of these conditions are characterized.
TL;DR: In this paper, an extensive and unified collection of structure-preserving transformations for non-degenerate bilinear or sesquilinear forms on R n or C n is presented.
TL;DR: In this paper, a model for simultaneous diagonalization and damping of J-Hermitian matrices is presented, along with a spectral decomposition of a general Hermitian matrix.
TL;DR: A survey of nearness problems is given, with particular emphasis on the fundamental properties of symmetry, positive definiteness, orthogonality, normality, rank-deficiency and instability.
TL;DR: In this article, a unitarily invariant norm is defined on matrices of order $n$ with complex entries such that: (i) $\|A\| \geq 0$; (ii) $\|A\| = 0$ if and only if $A = 0$; (iii) $\|cA\| = |c|\,\|A\|$ for any complex number $c$; (iv) $\|A + B\| \leq \|A\| + \|B\|$.
Q1. What are the contributions in "J-Orthogonal Matrices: Properties and Generation" (Higham, Nicholas J., 2003)?
The authors present techniques and tools useful in the analysis, application and construction of these matrices, giving a self-contained treatment that provides new insights. Then the authors show how the exchange operator can be used to obtain a hyperbolic CS decomposition of a J-orthogonal matrix directly from the usual CS decomposition of an orthogonal matrix. The authors introduce the indefinite polar decomposition and investigate two iterations for computing the J-orthogonal polar factor: a Newton iteration involving only matrix inversion and a Schulz iteration involving only matrix multiplication. The authors show that these iterations can be used to J-orthogonalize a matrix that is not too far from being J-orthogonal.
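For reference, the exchange operator referred to here has the following standard form (a sketch consistent with the usual definition; conventions such as sign placement are fixed in the paper itself). For $A$ partitioned conformally with $J = \mathrm{diag}(I_p, -I_q)$ and $A_{11}$ nonsingular:

```latex
\[
\operatorname{exc}(A) =
\begin{bmatrix}
A_{11}^{-1} & -A_{11}^{-1} A_{12} \\
A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12}
\end{bmatrix},
\qquad
A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.
\]
```

The key property is that $Q$ is J-orthogonal if and only if $\operatorname{exc}(Q)$ is orthogonal.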
Q2. What is the Newton iteration for a matrix?
Restoring lost orthogonality is a common requirement, for example in numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
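A minimal sketch of such an iteration, assuming the J-adjoint form $X_{k+1} = \frac{1}{2}(X_k + JX_k^{-T}J)$, which reduces to the classical Newton iteration for the orthogonal polar factor when $J = I$:

```python
import numpy as np

def newton_j_polar(X, J, iters=20):
    """Sketch of a Newton iteration for the J-orthogonal polar factor,
    assuming the update X <- (X + J X^{-T} J) / 2; for J = I this is the
    classical Newton iteration for the orthogonal polar factor.
    Only matrix inversion is required, as stated in the text."""
    for _ in range(iters):
        X = 0.5 * (X + J @ np.linalg.inv(X).T @ J)
    return X

# Restore J-orthogonality of a mildly perturbed J-orthogonal matrix.
J = np.diag([1.0, -1.0])
t = 0.7
Q = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])
X = Q + 1e-3 * np.random.default_rng(0).standard_normal((2, 2))
Q_new = newton_j_polar(X, J)
print(np.linalg.norm(Q_new.T @ J @ Q_new - J))  # small residual
```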
Q3. What is the simplest way to prove the convergence of $X_{k+1}$?
From standard analysis of this iteration (see, e.g., [23]) the authors know that $S_k$ converges quadratically to $\operatorname{sign}(S_0)$, which is the identity matrix since the spectrum of $S_0$ lies in the open right half-plane.
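The standard result being invoked here is the Newton iteration for the matrix sign function:

```latex
\[
S_{k+1} = \tfrac{1}{2}\bigl(S_k + S_k^{-1}\bigr),
\qquad S_k \to \operatorname{sign}(S_0) \text{ quadratically},
\]
```

provided $S_0$ has no eigenvalues on the imaginary axis; when the spectrum of $S_0$ lies in the open right half-plane, $\operatorname{sign}(S_0) = I$.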
Q4. What is the simplest way to show what the inverse of the Newton iteration is?
Unlike for orthogonal matrices, for general J-orthogonal matrices $\|Q\|_2$ can be arbitrarily large, and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
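A concrete instance (an illustration, not taken from the paper): for $J = \mathrm{diag}(1, -1)$, the hyperbolic rotation

```latex
\[
Q(t) = \begin{bmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{bmatrix}
\]
```

is J-orthogonal with singular values $e^{|t|}$ and $e^{-|t|}$, so $\|Q(t)\|_2 = e^{|t|}$ and $\kappa_2(Q(t)) = e^{2|t|}$, both of which grow without bound as $|t|$ increases.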
Q5. What is the way to compute the inverse of a matrix?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
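As a point of reference (a sketch of the base variant for the matrix inverse [31], not the paper's J-orthogonal variant), the Schulz iteration $X_{k+1} = X_k(2I - AX_k)$ uses only matrix multiplications and converges quadratically to $A^{-1}$ when $\|I - AX_0\| < 1$:

```python
import numpy as np

def schulz_inverse(A, iters=30):
    """Sketch of the Schulz iteration for the matrix inverse:
    X <- X (2I - A X), quadratically convergent when ||I - A X_0|| < 1.
    Uses only matrix multiplication."""
    n = A.shape[0]
    # Classical safe starting guess: X0 = A^T / (||A||_1 * ||A||_inf).
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(np.linalg.norm(schulz_inverse(A) @ A - np.eye(2)))  # ~1e-16
```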