J-Orthogonal Matrices: Properties and Generation
Citations
The hyperbolic elimination method for solving the equality constrained indefinite least squares problem
Hyperbolic-SVD-Based Square-Root Unscented Kalman Filters in Continuous-Discrete Target Tracking Scenarios
A note on the hyperbolic singular value decomposition without hyperexchange matrices
Cholesky-Like Factorization of Symmetric Indefinite Matrices and Orthogonalization with Respect to Bilinear Forms
MATLAB Toolbox for Classical Matrix Groups
References
Matrix Analysis
Matrix Perturbation Theory
Accuracy and Stability of Numerical Algorithms
Matrix Algorithms
Computing the Polar Decomposition with Applications
Frequently Asked Questions
Q2. What is the Newton iteration for a matrix?
Restoring lost orthogonality is a common requirement, for example in the numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
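A minimal sketch of how such a restoration can work, assuming the classical Newton iteration for the orthogonal polar factor, $X_{k+1} = \frac{1}{2}(X_k + X_k^{-T})$; the function name `newton_orthogonalize` and the test setup are illustrative, not taken from the paper:

```python
import numpy as np

def newton_orthogonalize(X, tol=1e-14, max_iter=50):
    """Newton iteration for the orthogonal polar factor:
    X_{k+1} = (X_k + X_k^{-T}) / 2. For X near an orthogonal
    matrix, the limit is the nearest orthogonal matrix to X."""
    for _ in range(max_iter):
        X_next = 0.5 * (X + np.linalg.inv(X).T)
        if np.linalg.norm(X_next - X, ord='fro') < tol:
            return X_next
        X = X_next
    return X

# A nearly orthogonal matrix: an exact Q factor plus a small perturbation.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
X = Q + 1e-4 * rng.standard_normal((4, 4))
Y = newton_orthogonalize(X)
print(np.linalg.norm(Y.T @ Y - np.eye(4)))  # ~ machine precision
```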
Q3. What is the simplest way to prove the convergence of $X_{k+1}$?
From standard analysis of this iteration (see, e.g., [23]), $S_k$ converges quadratically to $\operatorname{sign}(S_0)$, which is the identity matrix since the spectrum of $S_0$ lies in the open right half-plane.
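The quadratic convergence can be illustrated numerically; here is a sketch assuming the standard matrix sign (Newton) iteration $S_{k+1} = \frac{1}{2}(S_k + S_k^{-1})$, applied to a matrix chosen so that its spectrum lies in the open right half-plane:

```python
import numpy as np

# Matrix sign (Newton) iteration: S_{k+1} = (S_k + S_k^{-1}) / 2.
# When the spectrum of S_0 lies in the open right half-plane,
# sign(S_0) = I, and the error shrinks quadratically.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5 * np.eye(5)   # symmetric positive definite => spectrum > 0

for k in range(10):
    S = 0.5 * (S + np.linalg.inv(S))
    err = np.linalg.norm(S - np.eye(5), ord=2)
    print(f"iteration {k + 1}: ||S_k - I||_2 = {err:.2e}")
    if err < 1e-15:
        break
```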
Q4. What is the simplest way to show what the inverse of the Newton iteration is?
Unlike for orthogonal matrices, for general J-orthogonal matrices $\|Q\|_2$ can be arbitrarily large, and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
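A concrete illustration of the unbounded norm, using the standard 2×2 hyperbolic rotation (this example is not taken verbatim from the text above): for $J = \operatorname{diag}(1, -1)$ the matrix $Q = \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix}$ satisfies $Q^T J Q = J$, yet $\|Q\|_2 = e^{|t|}$ grows without bound as $|t|$ increases.

```python
import numpy as np

# A 2x2 hyperbolic rotation is J-orthogonal for J = diag(1, -1):
# Q^T J Q = J holds exactly, but the eigenvalues of Q are e^t and
# e^{-t}, so ||Q||_2 = e^{|t|} can be made arbitrarily large.
J = np.diag([1.0, -1.0])

for t in (1.0, 5.0, 10.0):
    Q = np.array([[np.cosh(t), np.sinh(t)],
                  [np.sinh(t), np.cosh(t)]])
    j_err = np.linalg.norm(Q.T @ J @ Q - J)
    print(f"t = {t:4.1f}: ||Q^T J Q - J|| = {j_err:.1e}, "
          f"||Q||_2 = {np.linalg.norm(Q, 2):.3e}")
```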
Q5. How can the inverse of the matrix be computed?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
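A sketch of the Schulz iteration in its matrix-inverse variant, $X_{k+1} = X_k(2I - AX_k)$; the starting matrix $X_0 = A^T / (\|A\|_1 \|A\|_\infty)$ is a standard safe choice guaranteeing convergence for nonsingular $A$, and the helper name `schulz_inverse` is illustrative:

```python
import numpy as np

def schulz_inverse(A, max_iter=50, tol=1e-14):
    """Schulz iteration for A^{-1}: X_{k+1} = X_k (2I - A X_k).
    Converges quadratically once ||I - A X_k|| < 1; the starting
    matrix X_0 = A^T / (||A||_1 ||A||_inf) ensures the spectral
    radius of I - A X_0 is below 1 for nonsingular A."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X_next = X @ (2 * I - A @ X)
        if np.linalg.norm(I - A @ X_next) < tol:
            return X_next
        X = X_next
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = schulz_inverse(A)
print(np.linalg.norm(A @ X - np.eye(2)))  # ~ machine precision
```

Note that the iteration uses only matrix multiplications, which is what makes Schulz-type variants attractive as inversion-free alternatives to the Newton iteration.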