J-Orthogonal Matrices: Properties and Generation
References
Matrix Analysis
Matrix Perturbation Theory
Accuracy and Stability of Numerical Algorithms
Matrix Algorithms
Computing the Polar Decomposition with Applications
Frequently Asked Questions (5)
Q2. What is the Newton iteration of a matrix?
Restoring lost orthogonality is a common requirement, for example in the numerical solution of matrix differential equations having an orthogonal solution [17], or for computed eigenvector matrices of symmetric matrices.
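A minimal sketch of restoring orthogonality via the Newton iteration for the orthogonal polar factor, X_{k+1} = (X_k + X_k^{-T})/2; the matrix sizes, iteration count, and perturbation level below are illustrative choices, not taken from the paper:

```python
import numpy as np

def newton_polar(X, iters=20):
    """Newton iteration X <- (X + X^{-T})/2, converging to the
    orthogonal polar factor of a nonsingular X."""
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X).T)
    return X

rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q0 + 1e-3 * rng.standard_normal((4, 4))   # slightly non-orthogonal
Q = newton_polar(A)
print(np.linalg.norm(Q.T @ Q - np.eye(4)))    # orthogonality restored
```

The iterate Q is the closest orthogonal matrix to A in the Frobenius norm, which is why this iteration is a standard tool for reorthogonalization.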
Q3. What is the simplest way to prove the convergence of X_{k+1}?
From the standard analysis of this iteration (see, e.g., [23]), S_k converges quadratically to sign(S_0), which is the identity matrix since the spectrum of S_0 lies in the open right half-plane.
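This behavior can be checked numerically with the Newton sign iteration S_{k+1} = (S_k + S_k^{-1})/2. A hedged sketch, using a symmetric positive definite starting matrix (my choice, purely to guarantee the spectrum lies in the open right half-plane):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
S = B @ B.T + 5 * np.eye(4)        # SPD, so all eigenvalues > 0
for _ in range(10):
    S = 0.5 * (S + np.linalg.inv(S))  # Newton sign iteration
print(np.linalg.norm(S - np.eye(4)))  # converges to sign(S_0) = I
```

On each eigenvalue the iteration acts as the scalar map λ ← (λ + 1/λ)/2, which converges quadratically to 1 for any λ in the open right half-plane, matching the claim above.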
Q4. What is the simplest way to show that the inverse of the Newton iteration is?
Unlike for orthogonal matrices, for general J-orthogonal matrices ‖Q‖2 can be arbitrarily large, and this has implications for the attainable accuracy of the Newton and Schulz iterations in floating point arithmetic.
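A concrete illustration of this unboundedness: a 2×2 hyperbolic rotation is J-orthogonal for J = diag(1, -1), satisfying Q^T J Q = J, yet its 2-norm is e^t and so grows without bound as t increases. A minimal sketch:

```python
import numpy as np

J = np.diag([1.0, -1.0])
for t in (1.0, 5.0, 10.0):
    c, s = np.cosh(t), np.sinh(t)
    Q = np.array([[c, s],
                  [s, c]])              # hyperbolic rotation
    assert np.allclose(Q.T @ J @ Q, J)  # J-orthogonality holds
    print(t, np.linalg.norm(Q, 2))      # 2-norm = e^t, unbounded in t
```

By contrast, every orthogonal matrix has ‖Q‖2 = 1, which is why error analyses that are routine in the orthogonal case become delicate here.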
Q5. What is a way to compute the inverse of a matrix?
Such an iteration can be obtained by adapting the Schulz iteration, which exists in variants for computing the matrix inverse [31], the orthogonal polar factor [20], the matrix sign function [22], and the matrix square root [18].
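For the matrix-inverse variant [31], the Schulz iteration is X_{k+1} = X_k(2I - A X_k), which uses only matrix multiplications (no inversions) and converges quadratically when ‖I - A X_0‖ < 1. A minimal sketch, using the classical starting guess X_0 = A^T / (‖A‖_1 ‖A‖_∞), which guarantees convergence for nonsingular A:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
# Starting guess ensuring the eigenvalues of A @ X lie in (0, 1]
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(4)
for _ in range(50):
    X = X @ (2 * I - A @ X)   # Schulz iteration: multiplications only
print(np.linalg.norm(A @ X - I))  # residual near machine precision
```

Because the iteration is inversion-free, it is attractive whenever matrix multiplication is cheap relative to solving linear systems, which is the motivation for the Schulz-type variants cited above.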